Test Report: Docker_Linux_crio_arm64 21866

77bc04e31513dc44a023e1d185fd1b44f1864364:2025-11-08:42249

Failed tests (36/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.78
35 TestAddons/parallel/Registry 16.7
36 TestAddons/parallel/RegistryCreds 0.47
37 TestAddons/parallel/Ingress 144.57
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.36
41 TestAddons/parallel/CSI 41.25
42 TestAddons/parallel/Headlamp 3.13
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 8.45
45 TestAddons/parallel/NvidiaDevicePlugin 6.27
46 TestAddons/parallel/Yakd 6.26
97 TestFunctional/parallel/ServiceCmdConnect 603.89
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
128 TestFunctional/parallel/ServiceCmd/DeployApp 600.86
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
147 TestFunctional/parallel/ServiceCmd/Format 0.42
148 TestFunctional/parallel/ServiceCmd/URL 0.39
191 TestJSONOutput/pause/Command 1.86
197 TestJSONOutput/unpause/Command 1.69
292 TestPause/serial/Pause 8.25
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.54
303 TestStartStop/group/old-k8s-version/serial/Pause 7.05
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.57
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.4
321 TestStartStop/group/no-preload/serial/Pause 8.23
327 TestStartStop/group/embed-certs/serial/Pause 6.74
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.59
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.2
341 TestStartStop/group/newest-cni/serial/Pause 7.09
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.06
TestAddons/serial/Volcano (0.78s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable volcano --alsologtostderr -v=1: exit status 11 (783.325052ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:15:56.861071  300738 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:15:56.861945  300738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:15:56.861979  300738 out.go:374] Setting ErrFile to fd 2...
	I1108 09:15:56.862001  300738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:15:56.862327  300738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:15:56.862690  300738 mustload.go:66] Loading cluster: addons-461635
	I1108 09:15:56.863070  300738 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:15:56.863087  300738 addons.go:607] checking whether the cluster is paused
	I1108 09:15:56.863194  300738 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:15:56.863208  300738 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:15:56.863672  300738 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:15:56.898086  300738 ssh_runner.go:195] Run: systemctl --version
	I1108 09:15:56.898145  300738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:15:56.916712  300738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:15:57.023567  300738 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:15:57.023692  300738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:15:57.062059  300738 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:15:57.062124  300738 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:15:57.062137  300738 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:15:57.062142  300738 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:15:57.062145  300738 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:15:57.062149  300738 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:15:57.062152  300738 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:15:57.062156  300738 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:15:57.062159  300738 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:15:57.062165  300738 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:15:57.062168  300738 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:15:57.062172  300738 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:15:57.062175  300738 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:15:57.062178  300738 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:15:57.062183  300738 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:15:57.062188  300738 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:15:57.062195  300738 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:15:57.062201  300738 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:15:57.062204  300738 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:15:57.062209  300738 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:15:57.062225  300738 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:15:57.062229  300738 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:15:57.062232  300738 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:15:57.062235  300738 cri.go:89] found id: ""
	I1108 09:15:57.062286  300738 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:15:57.077873  300738 out.go:203] 
	W1108 09:15:57.080844  300738 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:15:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:15:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:15:57.080869  300738 out.go:285] * 
	* 
	W1108 09:15:57.535643  300738 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:15:57.538723  300738 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.78s)
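Every addon-disable failure in this run exits with MK_ADDON_DISABLE_PAUSED: the disable path first checks whether the cluster is paused, and that check shells out to "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" even though crictl lists the kube-system containers without trouble. A minimal sketch for re-running that check by hand, using only commands that appear in the log above (the profile name addons-461635 and the minikube binary path are specific to this run):

  out/minikube-linux-arm64 -p addons-461635 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
  out/minikube-linux-arm64 -p addons-461635 ssh "sudo runc list -f json"

If the first command returns container IDs while the second still fails, the pause check itself is the likely culprit rather than the cluster actually being paused.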

TestAddons/parallel/Registry (16.7s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.077049ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006416891s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003957047s
addons_test.go:392: (dbg) Run:  kubectl --context addons-461635 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-461635 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-461635 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.111087549s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 ip
2025/11/08 09:16:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable registry --alsologtostderr -v=1: exit status 11 (308.034018ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:16:23.318166  301733 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:23.318930  301733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:23.318950  301733 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:23.318957  301733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:23.319235  301733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:16:23.319550  301733 mustload.go:66] Loading cluster: addons-461635
	I1108 09:16:23.319923  301733 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:23.319941  301733 addons.go:607] checking whether the cluster is paused
	I1108 09:16:23.320064  301733 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:23.320080  301733 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:16:23.320580  301733 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:16:23.343783  301733 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:23.343849  301733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:16:23.365140  301733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:16:23.480297  301733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:23.480462  301733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:23.512645  301733 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:16:23.512673  301733 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:16:23.512678  301733 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:16:23.512682  301733 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:16:23.512686  301733 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:16:23.512692  301733 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:16:23.512695  301733 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:16:23.512699  301733 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:16:23.512702  301733 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:16:23.512708  301733 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:16:23.512711  301733 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:16:23.512715  301733 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:16:23.512718  301733 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:16:23.512721  301733 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:16:23.512725  301733 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:16:23.512730  301733 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:16:23.512736  301733 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:16:23.512740  301733 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:16:23.512743  301733 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:16:23.512747  301733 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:16:23.512752  301733 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:16:23.512755  301733 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:16:23.512758  301733 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:16:23.512761  301733 cri.go:89] found id: ""
	I1108 09:16:23.512814  301733 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:16:23.528478  301733 out.go:203] 
	W1108 09:16:23.531589  301733 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:16:23.531612  301733 out.go:285] * 
	* 
	W1108 09:16:23.538230  301733 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:16:23.541438  301733 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.70s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.034798ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-461635
addons_test.go:332: (dbg) Run:  kubectl --context addons-461635 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (248.681683ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:17:18.738703  303334 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:18.739460  303334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:18.739476  303334 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:18.739482  303334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:18.739786  303334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:17:18.740145  303334 mustload.go:66] Loading cluster: addons-461635
	I1108 09:17:18.740562  303334 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:18.740583  303334 addons.go:607] checking whether the cluster is paused
	I1108 09:17:18.740728  303334 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:18.740745  303334 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:17:18.741302  303334 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:17:18.758251  303334 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:18.758328  303334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:17:18.774619  303334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:17:18.879441  303334 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:17:18.879522  303334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:17:18.908681  303334 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:17:18.908700  303334 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:17:18.908704  303334 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:17:18.908709  303334 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:17:18.908712  303334 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:17:18.908716  303334 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:17:18.908725  303334 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:17:18.908728  303334 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:17:18.908732  303334 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:17:18.908739  303334 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:17:18.908742  303334 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:17:18.908746  303334 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:17:18.908749  303334 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:17:18.908752  303334 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:17:18.908756  303334 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:17:18.908761  303334 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:17:18.908764  303334 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:17:18.908768  303334 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:17:18.908772  303334 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:17:18.908775  303334 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:17:18.908780  303334 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:17:18.908783  303334 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:17:18.908786  303334 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:17:18.908789  303334 cri.go:89] found id: ""
	I1108 09:17:18.908837  303334 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:18.923531  303334 out.go:203] 
	W1108 09:17:18.926571  303334 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:17:18.926599  303334 out.go:285] * 
	* 
	W1108 09:17:18.932959  303334 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:17:18.935931  303334 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)
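Registry, RegistryCreds, and the other addon-disable failures above hit the identical runc error, which suggests the node's crio is not driving containers through runc (so no /run/runc state directory exists) rather than that anything is actually paused. A hedged way to confirm which OCI runtime the node is using, assuming the same profile name; the crun state directory and the default_runtime key in crio's configuration are standard crio details, not values taken from this log:

  out/minikube-linux-arm64 -p addons-461635 ssh "ls -d /run/runc /run/crun"
  out/minikube-linux-arm64 -p addons-461635 ssh "sudo crio config 2>/dev/null | grep default_runtime"

If /run/crun exists and default_runtime is not runc, these MK_ADDON_DISABLE_PAUSED failures would point to a pause-detection problem in the minikube binary under test rather than a regression in the addons themselves.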

TestAddons/parallel/Ingress (144.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-461635 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-461635 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-461635 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b4817465-48ad-4cbf-a50e-a4c8a71d3899] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b4817465-48ad-4cbf-a50e-a4c8a71d3899] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006197636s
I1108 09:16:44.866197  294085 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.933352178s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-461635 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-461635
helpers_test.go:243: (dbg) docker inspect addons-461635:

-- stdout --
	[
	    {
	        "Id": "2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6",
	        "Created": "2025-11-08T09:13:40.933160298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295293,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:13:40.995915601Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/hosts",
	        "LogPath": "/var/lib/docker/containers/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6-json.log",
	        "Name": "/addons-461635",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-461635:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-461635",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6",
	                "LowerDir": "/var/lib/docker/overlay2/5da389f65b257a70ef6517eb11b4312d339222d422b2c4f9e8475f505c2f6404-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5da389f65b257a70ef6517eb11b4312d339222d422b2c4f9e8475f505c2f6404/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5da389f65b257a70ef6517eb11b4312d339222d422b2c4f9e8475f505c2f6404/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5da389f65b257a70ef6517eb11b4312d339222d422b2c4f9e8475f505c2f6404/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-461635",
	                "Source": "/var/lib/docker/volumes/addons-461635/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-461635",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-461635",
	                "name.minikube.sigs.k8s.io": "addons-461635",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f5883474581e350ae2eca52ea9cc7173a14c2c0663e9df326d2d633cf44ed877",
	            "SandboxKey": "/var/run/docker/netns/f5883474581e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-461635": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:5a:45:39:05:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9daea876ae108ba25eea3cd32aa706b2fe54f1ae544f9d17ff1eb4b284d4fe68",
	                    "EndpointID": "b206692f7ad9f0edde23ceda1d22bcc170384cd340464d5f4cedbf521a0571c2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-461635",
	                        "2c24103c57a6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-461635 -n addons-461635
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-461635 logs -n 25: (1.45413523s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-036976                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-036976 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ start   │ --download-only -p binary-mirror-382750 --alsologtostderr --binary-mirror http://127.0.0.1:38109 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-382750   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	│ delete  │ -p binary-mirror-382750                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-382750   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ addons  │ enable dashboard -p addons-461635                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-461635                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	│ start   │ -p addons-461635 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:15 UTC │
	│ addons  │ addons-461635 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:15 UTC │                     │
	│ addons  │ addons-461635 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-461635 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-461635 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-461635 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-461635 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ip      │ addons-461635 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ addons-461635 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-461635 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ addons-461635 ssh cat /opt/local-path-provisioner/pvc-aed32540-d952-4f4f-87bc-ef0c1030256d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ addons-461635 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-461635 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ addons-461635 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-461635 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ addons  │ addons-461635 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ addons  │ addons-461635 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-461635                                                                                                                                                                                                                                                                                                                                                                                           │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ addons-461635 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ ip      │ addons-461635 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:13:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:13:14.929234  294890 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:13:14.929369  294890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:13:14.929378  294890 out.go:374] Setting ErrFile to fd 2...
	I1108 09:13:14.929384  294890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:13:14.929650  294890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:13:14.930088  294890 out.go:368] Setting JSON to false
	I1108 09:13:14.930917  294890 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6944,"bootTime":1762586251,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 09:13:14.930984  294890 start.go:143] virtualization:  
	I1108 09:13:14.934350  294890 out.go:179] * [addons-461635] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:13:14.938196  294890 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:13:14.938351  294890 notify.go:221] Checking for updates...
	I1108 09:13:14.944066  294890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:13:14.946995  294890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:13:14.949850  294890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 09:13:14.952957  294890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:13:14.955817  294890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:13:14.958875  294890 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:13:14.989611  294890 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:13:14.989790  294890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:13:15.081610  294890 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:13:15.071746751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:13:15.081719  294890 docker.go:319] overlay module found
	I1108 09:13:15.085004  294890 out.go:179] * Using the docker driver based on user configuration
	I1108 09:13:15.087840  294890 start.go:309] selected driver: docker
	I1108 09:13:15.087859  294890 start.go:930] validating driver "docker" against <nil>
	I1108 09:13:15.087881  294890 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:13:15.088676  294890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:13:15.149384  294890 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:13:15.139924792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:13:15.149544  294890 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:13:15.149784  294890 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:13:15.152707  294890 out.go:179] * Using Docker driver with root privileges
	I1108 09:13:15.155515  294890 cni.go:84] Creating CNI manager for ""
	I1108 09:13:15.155600  294890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:13:15.155615  294890 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:13:15.155706  294890 start.go:353] cluster config:
	{Name:addons-461635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1108 09:13:15.158837  294890 out.go:179] * Starting "addons-461635" primary control-plane node in "addons-461635" cluster
	I1108 09:13:15.161557  294890 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:13:15.164531  294890 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:13:15.167397  294890 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:13:15.167432  294890 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:13:15.167451  294890 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 09:13:15.167461  294890 cache.go:59] Caching tarball of preloaded images
	I1108 09:13:15.167552  294890 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 09:13:15.167562  294890 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
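	The preload referenced here is a local tarball, so its presence and size can be checked directly on the build host; a minimal look, using the exact cache path from the lines above:

	    # Confirm the preloaded cri-o image tarball is where the log says it is
	    ls -lh /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4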
	I1108 09:13:15.167896  294890 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/config.json ...
	I1108 09:13:15.167915  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/config.json: {Name:mk80158965353712057df83f45f11f645e406d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:15.184841  294890 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:13:15.185014  294890 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:13:15.185041  294890 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 09:13:15.185047  294890 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 09:13:15.185066  294890 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 09:13:15.185079  294890 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1108 09:13:32.985397  294890 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1108 09:13:32.985435  294890 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:13:32.985466  294890 start.go:360] acquireMachinesLock for addons-461635: {Name:mk5ac93816e32ad490db32cd4a09ffd11e3e098c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:13:32.986194  294890 start.go:364] duration metric: took 699.995µs to acquireMachinesLock for "addons-461635"
	I1108 09:13:32.986234  294890 start.go:93] Provisioning new machine with config: &{Name:addons-461635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:13:32.986330  294890 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:13:32.989823  294890 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 09:13:32.990062  294890 start.go:159] libmachine.API.Create for "addons-461635" (driver="docker")
	I1108 09:13:32.990102  294890 client.go:173] LocalClient.Create starting
	I1108 09:13:32.990235  294890 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 09:13:33.138094  294890 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 09:13:34.089145  294890 cli_runner.go:164] Run: docker network inspect addons-461635 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:13:34.107576  294890 cli_runner.go:211] docker network inspect addons-461635 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:13:34.107673  294890 network_create.go:284] running [docker network inspect addons-461635] to gather additional debugging logs...
	I1108 09:13:34.107694  294890 cli_runner.go:164] Run: docker network inspect addons-461635
	W1108 09:13:34.125389  294890 cli_runner.go:211] docker network inspect addons-461635 returned with exit code 1
	I1108 09:13:34.125443  294890 network_create.go:287] error running [docker network inspect addons-461635]: docker network inspect addons-461635: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-461635 not found
	I1108 09:13:34.125461  294890 network_create.go:289] output of [docker network inspect addons-461635]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-461635 not found
	
	** /stderr **
	I1108 09:13:34.125560  294890 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:13:34.141608  294890 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001969980}
	I1108 09:13:34.141659  294890 network_create.go:124] attempt to create docker network addons-461635 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 09:13:34.141714  294890 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-461635 addons-461635
	I1108 09:13:34.196256  294890 network_create.go:108] docker network addons-461635 192.168.49.0/24 created
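	To cross-check the subnet and gateway that network_create reports, the created network can be inspected from the same host; a small sketch, assuming only the docker CLI and the network name from this run:

	    # Print the subnet and gateway Docker assigned to the addons-461635 network
	    docker network inspect addons-461635 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # Expected from the log above: 192.168.49.0/24 192.168.49.1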
	I1108 09:13:34.196287  294890 kic.go:121] calculated static IP "192.168.49.2" for the "addons-461635" container
	I1108 09:13:34.196368  294890 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:13:34.211553  294890 cli_runner.go:164] Run: docker volume create addons-461635 --label name.minikube.sigs.k8s.io=addons-461635 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:13:34.228706  294890 oci.go:103] Successfully created a docker volume addons-461635
	I1108 09:13:34.228792  294890 cli_runner.go:164] Run: docker run --rm --name addons-461635-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-461635 --entrypoint /usr/bin/test -v addons-461635:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:13:36.440449  294890 cli_runner.go:217] Completed: docker run --rm --name addons-461635-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-461635 --entrypoint /usr/bin/test -v addons-461635:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (2.211616396s)
	I1108 09:13:36.440478  294890 oci.go:107] Successfully prepared a docker volume addons-461635
	I1108 09:13:36.440506  294890 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:13:36.440525  294890 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:13:36.440601  294890 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-461635:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:13:40.853645  294890 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-461635:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413006013s)
	I1108 09:13:40.853676  294890 kic.go:203] duration metric: took 4.413147816s to extract preloaded images to volume ...
	W1108 09:13:40.853842  294890 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 09:13:40.853960  294890 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:13:40.918002  294890 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-461635 --name addons-461635 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-461635 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-461635 --network addons-461635 --ip 192.168.49.2 --volume addons-461635:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:13:41.234823  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Running}}
	I1108 09:13:41.254821  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:13:41.278537  294890 cli_runner.go:164] Run: docker exec addons-461635 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:13:41.336856  294890 oci.go:144] the created container "addons-461635" has a running status.
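	The node runs as an ordinary Docker container with several guest ports published to 127.0.0.1, so the host-side mappings can be read back with docker port; a quick check, assuming the container name from this run (the SSH mapping shows up later in the log as 127.0.0.1:33138):

	    # Show where the container's SSH and API server ports landed on the host
	    docker port addons-461635 22/tcp
	    docker port addons-461635 8443/tcp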
	I1108 09:13:41.336887  294890 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa...
	I1108 09:13:41.803752  294890 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:13:41.822032  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:13:41.837896  294890 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:13:41.837918  294890 kic_runner.go:114] Args: [docker exec --privileged addons-461635 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:13:41.883355  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:13:41.900520  294890 machine.go:94] provisionDockerMachine start ...
	I1108 09:13:41.900633  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:41.918412  294890 main.go:143] libmachine: Using SSH client type: native
	I1108 09:13:41.918752  294890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1108 09:13:41.918770  294890 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:13:41.919376  294890 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45250->127.0.0.1:33138: read: connection reset by peer
	I1108 09:13:45.090458  294890 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-461635
	
	I1108 09:13:45.090483  294890 ubuntu.go:182] provisioning hostname "addons-461635"
	I1108 09:13:45.090559  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:45.120410  294890 main.go:143] libmachine: Using SSH client type: native
	I1108 09:13:45.120779  294890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1108 09:13:45.120798  294890 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-461635 && echo "addons-461635" | sudo tee /etc/hostname
	I1108 09:13:45.314549  294890 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-461635
	
	I1108 09:13:45.314722  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:45.338072  294890 main.go:143] libmachine: Using SSH client type: native
	I1108 09:13:45.338650  294890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1108 09:13:45.338705  294890 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-461635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-461635/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-461635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:13:45.497186  294890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:13:45.497212  294890 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 09:13:45.497233  294890 ubuntu.go:190] setting up certificates
	I1108 09:13:45.497264  294890 provision.go:84] configureAuth start
	I1108 09:13:45.497347  294890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-461635
	I1108 09:13:45.515025  294890 provision.go:143] copyHostCerts
	I1108 09:13:45.515113  294890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 09:13:45.515243  294890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 09:13:45.515316  294890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 09:13:45.515381  294890 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.addons-461635 san=[127.0.0.1 192.168.49.2 addons-461635 localhost minikube]
	I1108 09:13:45.791521  294890 provision.go:177] copyRemoteCerts
	I1108 09:13:45.791591  294890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:13:45.791631  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:45.809323  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:45.912701  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:13:45.930370  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:13:45.947576  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
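	The server certificate copied here was generated with the SANs listed in the provision step above (127.0.0.1, 192.168.49.2, addons-461635, localhost, minikube); one way to confirm them, assuming a reasonably recent openssl on the build host:

	    # Print the subject and SANs of the generated machine server certificate
	    openssl x509 \
	      -in /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem \
	      -noout -subject -ext subjectAltName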
	I1108 09:13:45.964966  294890 provision.go:87] duration metric: took 467.682646ms to configureAuth
	I1108 09:13:45.964991  294890 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:13:45.965177  294890 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:13:45.965282  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:45.982156  294890 main.go:143] libmachine: Using SSH client type: native
	I1108 09:13:45.982464  294890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1108 09:13:45.982485  294890 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:13:46.238850  294890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:13:46.238933  294890 machine.go:97] duration metric: took 4.338381809s to provisionDockerMachine
	I1108 09:13:46.238962  294890 client.go:176] duration metric: took 13.248846703s to LocalClient.Create
	I1108 09:13:46.239003  294890 start.go:167] duration metric: took 13.248940308s to libmachine.API.Create "addons-461635"
	I1108 09:13:46.239025  294890 start.go:293] postStartSetup for "addons-461635" (driver="docker")
	I1108 09:13:46.239057  294890 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:13:46.239145  294890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:13:46.239229  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:46.257122  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:46.360876  294890 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:13:46.364173  294890 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:13:46.364203  294890 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:13:46.364215  294890 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 09:13:46.364281  294890 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 09:13:46.364307  294890 start.go:296] duration metric: took 125.254761ms for postStartSetup
	I1108 09:13:46.364635  294890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-461635
	I1108 09:13:46.381054  294890 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/config.json ...
	I1108 09:13:46.381334  294890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:13:46.381386  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:46.397798  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:46.497721  294890 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:13:46.502150  294890 start.go:128] duration metric: took 13.515804486s to createHost
	I1108 09:13:46.502176  294890 start.go:83] releasing machines lock for "addons-461635", held for 13.515958745s
	I1108 09:13:46.502246  294890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-461635
	I1108 09:13:46.522492  294890 ssh_runner.go:195] Run: cat /version.json
	I1108 09:13:46.522519  294890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:13:46.522546  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:46.522585  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:46.545834  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:46.546255  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:46.648535  294890 ssh_runner.go:195] Run: systemctl --version
	I1108 09:13:46.782919  294890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:13:46.819424  294890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:13:46.823688  294890 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:13:46.823760  294890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:13:46.852866  294890 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 09:13:46.852888  294890 start.go:496] detecting cgroup driver to use...
	I1108 09:13:46.852936  294890 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 09:13:46.852990  294890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:13:46.869780  294890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:13:46.882426  294890 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:13:46.882493  294890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:13:46.900220  294890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:13:46.919128  294890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:13:47.029594  294890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:13:47.150541  294890 docker.go:234] disabling docker service ...
	I1108 09:13:47.150613  294890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:13:47.171731  294890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:13:47.183828  294890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:13:47.290891  294890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:13:47.400677  294890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:13:47.412720  294890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:13:47.426053  294890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:13:47.426117  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.434208  294890 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:13:47.434273  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.442685  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.450913  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.458929  294890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:13:47.466510  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.474736  294890 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.487870  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.496370  294890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:13:47.503634  294890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:13:47.511076  294890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:13:47.619106  294890 ssh_runner.go:195] Run: sudo systemctl restart crio
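	The sed commands above all edit the same CRI-O drop-in, so their net effect (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) can be read back in one pass; an illustrative check, assuming the minikube binary and profile name used in this run:

	    # Read back the CRI-O settings the sed edits above produced
	    out/minikube-linux-arm64 -p addons-461635 ssh -- \
	      "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"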
	I1108 09:13:47.735018  294890 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:13:47.735107  294890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:13:47.738959  294890 start.go:564] Will wait 60s for crictl version
	I1108 09:13:47.739025  294890 ssh_runner.go:195] Run: which crictl
	I1108 09:13:47.742161  294890 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:13:47.765154  294890 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:13:47.765297  294890 ssh_runner.go:195] Run: crio --version
	I1108 09:13:47.793309  294890 ssh_runner.go:195] Run: crio --version
	I1108 09:13:47.829428  294890 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:13:47.832365  294890 cli_runner.go:164] Run: docker network inspect addons-461635 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:13:47.848872  294890 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 09:13:47.852702  294890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:13:47.862228  294890 kubeadm.go:884] updating cluster {Name:addons-461635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:13:47.862347  294890 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:13:47.862406  294890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:13:47.896897  294890 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:13:47.896951  294890 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:13:47.897009  294890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:13:47.922027  294890 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:13:47.922048  294890 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:13:47.922056  294890 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1108 09:13:47.922146  294890 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-461635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
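	The kubelet flags logged here land in a systemd drop-in (the 10-kubeadm.conf transfer appears a few lines below), so the effective unit can be viewed in one place on the node; a sketch, assuming the same binary and profile:

	    # Show the kubelet unit plus the minikube-written drop-in inside the node
	    out/minikube-linux-arm64 -p addons-461635 ssh -- "sudo systemctl cat kubelet"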
	I1108 09:13:47.922231  294890 ssh_runner.go:195] Run: crio config
	I1108 09:13:47.993794  294890 cni.go:84] Creating CNI manager for ""
	I1108 09:13:47.993819  294890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:13:47.993835  294890 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:13:47.993860  294890 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-461635 NodeName:addons-461635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:13:47.993990  294890 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-461635"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:13:47.994065  294890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:13:48.002817  294890 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:13:48.002906  294890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:13:48.012417  294890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1108 09:13:48.027383  294890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:13:48.041363  294890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
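	The kubeadm configuration printed above is what gets staged as kubeadm.yaml.new on the node; it can be read back, and optionally validated, from the host. A sketch, assuming the same profile and that the v1.34.1 kubeadm binary sits in the binaries directory listed earlier (the validate subcommand exists in recent kubeadm releases):

	    # Read back the staged kubeadm config
	    out/minikube-linux-arm64 -p addons-461635 ssh -- "sudo cat /var/tmp/minikube/kubeadm.yaml.new"

	    # Optionally ask kubeadm itself to validate it
	    out/minikube-linux-arm64 -p addons-461635 ssh -- \
	      "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"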
	I1108 09:13:48.055171  294890 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:13:48.058920  294890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:13:48.068978  294890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:13:48.176174  294890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:13:48.191871  294890 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635 for IP: 192.168.49.2
	I1108 09:13:48.191941  294890 certs.go:195] generating shared ca certs ...
	I1108 09:13:48.191973  294890 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.192150  294890 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 09:13:48.544547  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt ...
	I1108 09:13:48.544580  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt: {Name:mke8c25306173191bbb978cc6b31777620639408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.545376  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key ...
	I1108 09:13:48.545397  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key: {Name:mkc48658a22731476e821f52cd5e14ba7058b5b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.545535  294890 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 09:13:48.726279  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt ...
	I1108 09:13:48.726308  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt: {Name:mka45cedd4150e66b2aea13b1729389e2dff3937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.726488  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key ...
	I1108 09:13:48.726503  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key: {Name:mkfd18642b86eb3301c865accec77a9eec51dea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.726584  294890 certs.go:257] generating profile certs ...
	I1108 09:13:48.726650  294890 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.key
	I1108 09:13:48.726669  294890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt with IP's: []
	I1108 09:13:49.194093  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt ...
	I1108 09:13:49.194125  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: {Name:mk5e8f185890f69ee75504fd11f70ef4a8cb1585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:49.194321  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.key ...
	I1108 09:13:49.194336  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.key: {Name:mk1721b65dea7348aad0517764302f1f8a3d0be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:49.195100  294890 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key.f1a283e1
	I1108 09:13:49.195125  294890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt.f1a283e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1108 09:13:49.370163  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt.f1a283e1 ...
	I1108 09:13:49.370190  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt.f1a283e1: {Name:mkb0741c9eda29b989f745dd1aab0e87f7499d26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:49.370358  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key.f1a283e1 ...
	I1108 09:13:49.370372  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key.f1a283e1: {Name:mk183102dcd9a2b367f13b3d268a66590afcd934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:49.370456  294890 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt.f1a283e1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt
	I1108 09:13:49.370531  294890 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key.f1a283e1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key
	I1108 09:13:49.370592  294890 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.key
	I1108 09:13:49.370611  294890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.crt with IP's: []
	I1108 09:13:50.074708  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.crt ...
	I1108 09:13:50.074741  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.crt: {Name:mk18c109963015c3ea7a23f35f9df2d631cbb402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:50.074955  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.key ...
	I1108 09:13:50.074973  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.key: {Name:mke729483fdd3d313e213d9507bf0dcd52c2aa18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:50.075900  294890 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:13:50.075947  294890 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:13:50.075978  294890 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:13:50.076010  294890 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 09:13:50.076601  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:13:50.096397  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:13:50.116365  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:13:50.134588  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 09:13:50.153870  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:13:50.172313  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:13:50.189844  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:13:50.207211  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:13:50.224509  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:13:50.241558  294890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:13:50.254755  294890 ssh_runner.go:195] Run: openssl version
	I1108 09:13:50.260967  294890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:13:50.269262  294890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:13:50.272804  294890 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:13:50.272975  294890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:13:50.317744  294890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:13:50.326186  294890 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:13:50.329588  294890 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:13:50.329637  294890 kubeadm.go:401] StartCluster: {Name:addons-461635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:13:50.329710  294890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:13:50.329764  294890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:13:50.355696  294890 cri.go:89] found id: ""
	I1108 09:13:50.355773  294890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:13:50.363383  294890 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:13:50.371169  294890 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:13:50.371235  294890 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:13:50.378840  294890 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:13:50.378861  294890 kubeadm.go:158] found existing configuration files:
	
	I1108 09:13:50.378937  294890 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:13:50.386377  294890 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:13:50.386442  294890 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:13:50.393811  294890 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:13:50.401514  294890 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:13:50.401601  294890 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:13:50.408867  294890 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:13:50.416470  294890 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:13:50.416575  294890 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:13:50.423734  294890 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:13:50.431967  294890 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:13:50.432031  294890 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:13:50.439200  294890 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:13:50.529021  294890 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 09:13:50.529358  294890 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 09:13:50.605892  294890 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:14:07.834534  294890 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:14:07.834610  294890 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:14:07.834735  294890 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:14:07.834804  294890 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 09:14:07.834853  294890 kubeadm.go:319] OS: Linux
	I1108 09:14:07.834918  294890 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:14:07.834990  294890 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 09:14:07.835045  294890 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:14:07.835109  294890 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:14:07.835179  294890 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:14:07.835257  294890 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:14:07.835307  294890 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:14:07.835358  294890 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:14:07.835430  294890 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 09:14:07.835514  294890 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:14:07.835613  294890 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:14:07.835707  294890 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:14:07.835772  294890 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:14:07.838791  294890 out.go:252]   - Generating certificates and keys ...
	I1108 09:14:07.838887  294890 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:14:07.838961  294890 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:14:07.839040  294890 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:14:07.839105  294890 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:14:07.839174  294890 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:14:07.839231  294890 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:14:07.839292  294890 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:14:07.839417  294890 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-461635 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:14:07.839496  294890 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:14:07.839622  294890 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-461635 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:14:07.839693  294890 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:14:07.839763  294890 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:14:07.839814  294890 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:14:07.839878  294890 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:14:07.839934  294890 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:14:07.839998  294890 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:14:07.840062  294890 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:14:07.840134  294890 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:14:07.840200  294890 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:14:07.840289  294890 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:14:07.840377  294890 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:14:07.843480  294890 out.go:252]   - Booting up control plane ...
	I1108 09:14:07.843598  294890 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:14:07.843684  294890 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:14:07.843761  294890 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:14:07.843878  294890 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:14:07.843999  294890 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:14:07.844116  294890 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:14:07.844211  294890 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:14:07.844255  294890 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:14:07.844435  294890 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:14:07.844578  294890 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:14:07.844651  294890 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501767819s
	I1108 09:14:07.844791  294890 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:14:07.844947  294890 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1108 09:14:07.845058  294890 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:14:07.845165  294890 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:14:07.845295  294890 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.719539162s
	I1108 09:14:07.845371  294890 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.297685716s
	I1108 09:14:07.845449  294890 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502237044s
	I1108 09:14:07.845606  294890 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:14:07.845806  294890 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:14:07.845921  294890 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:14:07.846169  294890 kubeadm.go:319] [mark-control-plane] Marking the node addons-461635 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:14:07.846248  294890 kubeadm.go:319] [bootstrap-token] Using token: 29waul.3t39uxcwk9pz3oyr
	I1108 09:14:07.851141  294890 out.go:252]   - Configuring RBAC rules ...
	I1108 09:14:07.851302  294890 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:14:07.851405  294890 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:14:07.851556  294890 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:14:07.851699  294890 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:14:07.851825  294890 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:14:07.851920  294890 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:14:07.852043  294890 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:14:07.852092  294890 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:14:07.852145  294890 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:14:07.852154  294890 kubeadm.go:319] 
	I1108 09:14:07.852217  294890 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:14:07.852224  294890 kubeadm.go:319] 
	I1108 09:14:07.852305  294890 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:14:07.852315  294890 kubeadm.go:319] 
	I1108 09:14:07.852348  294890 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:14:07.852411  294890 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:14:07.852470  294890 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:14:07.852480  294890 kubeadm.go:319] 
	I1108 09:14:07.852539  294890 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:14:07.852546  294890 kubeadm.go:319] 
	I1108 09:14:07.852596  294890 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:14:07.852600  294890 kubeadm.go:319] 
	I1108 09:14:07.852655  294890 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:14:07.852732  294890 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:14:07.852803  294890 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:14:07.852808  294890 kubeadm.go:319] 
	I1108 09:14:07.852897  294890 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:14:07.853109  294890 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:14:07.853118  294890 kubeadm.go:319] 
	I1108 09:14:07.853206  294890 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 29waul.3t39uxcwk9pz3oyr \
	I1108 09:14:07.853314  294890 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 09:14:07.853337  294890 kubeadm.go:319] 	--control-plane 
	I1108 09:14:07.853342  294890 kubeadm.go:319] 
	I1108 09:14:07.853431  294890 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:14:07.853435  294890 kubeadm.go:319] 
	I1108 09:14:07.853521  294890 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 29waul.3t39uxcwk9pz3oyr \
	I1108 09:14:07.853644  294890 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
	I1108 09:14:07.853653  294890 cni.go:84] Creating CNI manager for ""
	I1108 09:14:07.853660  294890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:14:07.856683  294890 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:14:07.859701  294890 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:14:07.863721  294890 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:14:07.863791  294890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:14:07.877859  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:14:08.161168  294890 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:14:08.161401  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:08.161527  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-461635 minikube.k8s.io/updated_at=2025_11_08T09_14_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=addons-461635 minikube.k8s.io/primary=true
	I1108 09:14:08.295981  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:08.296038  294890 ops.go:34] apiserver oom_adj: -16
	I1108 09:14:08.796202  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:09.297030  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:09.796782  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:10.296722  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:10.797047  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:11.296832  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:11.796746  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:12.296104  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:12.396381  294890 kubeadm.go:1114] duration metric: took 4.235043126s to wait for elevateKubeSystemPrivileges
	I1108 09:14:12.396408  294890 kubeadm.go:403] duration metric: took 22.066773896s to StartCluster
	I1108 09:14:12.396424  294890 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:14:12.396539  294890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:14:12.397010  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:14:12.397217  294890 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:14:12.397356  294890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:14:12.397599  294890 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:14:12.397628  294890 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1108 09:14:12.397720  294890 addons.go:70] Setting yakd=true in profile "addons-461635"
	I1108 09:14:12.397738  294890 addons.go:239] Setting addon yakd=true in "addons-461635"
	I1108 09:14:12.397761  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.398225  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.398547  294890 addons.go:70] Setting inspektor-gadget=true in profile "addons-461635"
	I1108 09:14:12.398564  294890 addons.go:239] Setting addon inspektor-gadget=true in "addons-461635"
	I1108 09:14:12.398586  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.399005  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.399598  294890 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-461635"
	I1108 09:14:12.399626  294890 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-461635"
	I1108 09:14:12.399664  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.400226  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.400531  294890 addons.go:70] Setting metrics-server=true in profile "addons-461635"
	I1108 09:14:12.400557  294890 addons.go:239] Setting addon metrics-server=true in "addons-461635"
	I1108 09:14:12.400581  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.401075  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.407197  294890 addons.go:70] Setting cloud-spanner=true in profile "addons-461635"
	I1108 09:14:12.407236  294890 addons.go:239] Setting addon cloud-spanner=true in "addons-461635"
	I1108 09:14:12.407269  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.407733  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.407869  294890 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-461635"
	I1108 09:14:12.407889  294890 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-461635"
	I1108 09:14:12.407909  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.408297  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.413862  294890 addons.go:70] Setting registry=true in profile "addons-461635"
	I1108 09:14:12.413899  294890 addons.go:239] Setting addon registry=true in "addons-461635"
	I1108 09:14:12.413934  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.414401  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.424582  294890 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-461635"
	I1108 09:14:12.424702  294890 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-461635"
	I1108 09:14:12.424761  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.425300  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.434821  294890 addons.go:70] Setting default-storageclass=true in profile "addons-461635"
	I1108 09:14:12.434865  294890 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-461635"
	I1108 09:14:12.435217  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.454071  294890 addons.go:70] Setting registry-creds=true in profile "addons-461635"
	I1108 09:14:12.454279  294890 addons.go:70] Setting gcp-auth=true in profile "addons-461635"
	I1108 09:14:12.454306  294890 mustload.go:66] Loading cluster: addons-461635
	I1108 09:14:12.454531  294890 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:14:12.454806  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.460318  294890 addons.go:239] Setting addon registry-creds=true in "addons-461635"
	I1108 09:14:12.460378  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.464547  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.487129  294890 addons.go:70] Setting storage-provisioner=true in profile "addons-461635"
	I1108 09:14:12.487187  294890 addons.go:239] Setting addon storage-provisioner=true in "addons-461635"
	I1108 09:14:12.487240  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.487863  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.495370  294890 addons.go:70] Setting ingress=true in profile "addons-461635"
	I1108 09:14:12.495414  294890 addons.go:239] Setting addon ingress=true in "addons-461635"
	I1108 09:14:12.542275  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.542864  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.558795  294890 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 09:14:12.562259  294890 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 09:14:12.562282  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 09:14:12.562356  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.501288  294890 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-461635"
	I1108 09:14:12.576923  294890 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-461635"
	I1108 09:14:12.577302  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.501301  294890 addons.go:70] Setting volcano=true in profile "addons-461635"
	I1108 09:14:12.592223  294890 addons.go:239] Setting addon volcano=true in "addons-461635"
	I1108 09:14:12.592266  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.592754  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.501308  294890 addons.go:70] Setting volumesnapshots=true in profile "addons-461635"
	I1108 09:14:12.619089  294890 addons.go:239] Setting addon volumesnapshots=true in "addons-461635"
	I1108 09:14:12.619185  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.619794  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.632862  294890 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 09:14:12.636123  294890 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:14:12.636167  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 09:14:12.636258  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.652066  294890 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 09:14:12.652276  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 09:14:12.501508  294890 out.go:179] * Verifying Kubernetes components...
	I1108 09:14:12.510962  294890 addons.go:70] Setting ingress-dns=true in profile "addons-461635"
	I1108 09:14:12.654440  294890 addons.go:239] Setting addon default-storageclass=true in "addons-461635"
	I1108 09:14:12.654487  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.655053  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.706758  294890 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:14:12.706781  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 09:14:12.706857  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.712461  294890 addons.go:239] Setting addon ingress-dns=true in "addons-461635"
	I1108 09:14:12.712541  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.713247  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.733996  294890 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 09:14:12.737011  294890 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 09:14:12.741036  294890 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 09:14:12.741152  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.745181  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 09:14:12.745208  294890 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 09:14:12.745322  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.759877  294890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:14:12.760009  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 09:14:12.741083  294890 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 09:14:12.737079  294890 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 09:14:12.795607  294890 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 09:14:12.795695  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.799495  294890 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:14:12.807771  294890 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 09:14:12.741044  294890 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:14:12.807881  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 09:14:12.807943  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	W1108 09:14:12.809206  294890 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 09:14:12.809408  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:12.811203  294890 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-461635"
	I1108 09:14:12.811683  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.812134  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.818562  294890 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 09:14:12.820501  294890 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 09:14:12.820523  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 09:14:12.820656  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.829103  294890 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 09:14:12.829544  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:12.830587  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 09:14:12.830762  294890 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:14:12.832633  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 09:14:12.832720  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.863206  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1108 09:14:12.863479  294890 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 09:14:12.832015  294890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:14:12.863816  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:14:12.863889  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.889929  294890 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1108 09:14:12.893085  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 09:14:12.893317  294890 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:14:12.893334  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 09:14:12.893398  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.919685  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 09:14:12.919837  294890 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 09:14:12.919876  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 09:14:12.922873  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 09:14:12.922906  294890 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 09:14:12.922979  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.927429  294890 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:14:12.927450  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 09:14:12.927543  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.950532  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 09:14:12.959227  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 09:14:12.963842  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 09:14:12.963865  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 09:14:12.963938  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.964212  294890 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:14:12.964226  294890 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:14:12.964267  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.994719  294890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:14:12.996139  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.007726  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.008582  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.018245  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.073628  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.074709  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.078724  294890 out.go:179]   - Using image docker.io/busybox:stable
	I1108 09:14:13.085450  294890 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 09:14:13.093120  294890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:14:13.093145  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 09:14:13.093209  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:13.101045  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.118701  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.143629  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.153956  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	W1108 09:14:13.157677  294890 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 09:14:13.157712  294890 retry.go:31] will retry after 223.376922ms: ssh: handshake failed: EOF
	I1108 09:14:13.159723  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.165901  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.170576  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	W1108 09:14:13.172251  294890 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 09:14:13.172277  294890 retry.go:31] will retry after 162.087618ms: ssh: handshake failed: EOF
	I1108 09:14:13.380567  294890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:14:13.679286  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:14:13.697935  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 09:14:13.734983  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:14:13.819566  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:14:13.873506  294890 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 09:14:13.873596  294890 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 09:14:13.938516  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:14:13.940815  294890 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 09:14:13.940885  294890 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 09:14:13.998993  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 09:14:13.999073  294890 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 09:14:14.002798  294890 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 09:14:14.002874  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 09:14:14.015271  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:14:14.023295  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:14:14.042596  294890 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 09:14:14.042672  294890 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 09:14:14.072221  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:14:14.074559  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:14:14.163677  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 09:14:14.163754  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 09:14:14.164116  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:14:14.181581  294890 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:14:14.181660  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 09:14:14.197414  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 09:14:14.197490  294890 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 09:14:14.201711  294890 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 09:14:14.201790  294890 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 09:14:14.204655  294890 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 09:14:14.204730  294890 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 09:14:14.351226  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:14:14.353986  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 09:14:14.354061  294890 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 09:14:14.355956  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 09:14:14.356031  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 09:14:14.372560  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 09:14:14.372638  294890 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 09:14:14.377907  294890 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:14:14.377980  294890 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 09:14:14.512035  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:14:14.564605  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:14:14.564629  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 09:14:14.581185  294890 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:14:14.581209  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 09:14:14.582334  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 09:14:14.582355  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 09:14:14.698651  294890 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.703888419s)
	I1108 09:14:14.698684  294890 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
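For context, the coredns one-liner completed above rewrites the Corefile held in the coredns ConfigMap: the sed expression splices a hosts block mapping 192.168.49.1 to host.minikube.internal (with fallthrough for all other names) ahead of the existing forward-to-/etc/resolv.conf stanza, adds a log directive before the errors plugin, and then replaces the ConfigMap. The injected record can be confirmed on a running cluster with something like the following (a manual check, not part of the test run):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
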
	I1108 09:14:14.699629  294890 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.318992729s)
	I1108 09:14:14.700237  294890 node_ready.go:35] waiting up to 6m0s for node "addons-461635" to be "Ready" ...
	I1108 09:14:14.768633  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 09:14:14.768706  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 09:14:14.772862  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:14:14.877148  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:14:14.891917  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.212579954s)
	I1108 09:14:15.085133  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 09:14:15.085217  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 09:14:15.212426  294890 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-461635" context rescaled to 1 replicas
	I1108 09:14:15.373114  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 09:14:15.373184  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 09:14:15.524334  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 09:14:15.524411  294890 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 09:14:15.721571  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 09:14:15.721596  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 09:14:15.894216  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.196195449s)
	I1108 09:14:15.894329  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.159272953s)
	I1108 09:14:15.894404  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.074765548s)
	I1108 09:14:15.894459  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.955880288s)
	I1108 09:14:15.933397  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 09:14:15.933424  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 09:14:16.074991  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 09:14:16.075024  294890 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 09:14:16.315128  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1108 09:14:16.719838  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:17.175033  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.159677897s)
	I1108 09:14:17.175185  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.151820904s)
	I1108 09:14:18.797566  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.725263969s)
	I1108 09:14:18.798058  294890 addons.go:480] Verifying addon ingress=true in "addons-461635"
	I1108 09:14:18.797701  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.723052598s)
	I1108 09:14:18.797747  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.633580225s)
	I1108 09:14:18.797772  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.446474073s)
	I1108 09:14:18.798282  294890 addons.go:480] Verifying addon registry=true in "addons-461635"
	I1108 09:14:18.797820  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.285714158s)
	I1108 09:14:18.798814  294890 addons.go:480] Verifying addon metrics-server=true in "addons-461635"
	I1108 09:14:18.797900  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.024892614s)
	W1108 09:14:18.798856  294890 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 09:14:18.798871  294890 retry.go:31] will retry after 238.442509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
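The "no matches for kind \"VolumeSnapshotClass\"" failure above is the usual CRD-establishment race: the csi-hostpath-snapclass object is applied in the same kubectl invocation as the CRDs that define its kind, and the API server has not yet registered the new types, so the last resource has no mapping. minikube copes by retrying after 238ms and, a moment later (09:14:19), re-running the batch with kubectl apply --force, which succeeds once the CRDs are established. Done by hand, the race can be avoided by applying the CRDs first and waiting for their Established condition, e.g. (manifest paths taken from the log; a sketch, not what the test actually ran):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
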
	I1108 09:14:18.797928  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.920703279s)
	I1108 09:14:18.802500  294890 out.go:179] * Verifying ingress addon...
	I1108 09:14:18.802504  294890 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-461635 service yakd-dashboard -n yakd-dashboard
	
	I1108 09:14:18.802613  294890 out.go:179] * Verifying registry addon...
	I1108 09:14:18.806063  294890 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 09:14:18.808805  294890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1108 09:14:18.813105  294890 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 09:14:18.813180  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:18.816377  294890 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:14:18.816445  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:19.034933  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.719707551s)
	I1108 09:14:19.035018  294890 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-461635"
	I1108 09:14:19.038078  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:14:19.038218  294890 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 09:14:19.041795  294890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 09:14:19.056702  294890 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:14:19.056766  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
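From here the log is dominated by the kapi.go waiters set up above: one per addon, each polling the pods behind a label selector and printing the current phase until the pods come up, hence the long run of "current state: Pending" lines that follows. The same state can be inspected by hand with the selectors and namespaces shown in the log, for example:

	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
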
	W1108 09:14:19.203354  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:19.310142  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:19.312490  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:19.545411  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:19.809594  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:19.811428  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:20.046020  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:20.310348  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:20.312491  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:20.353753  294890 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 09:14:20.353866  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:20.370773  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:20.481530  294890 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 09:14:20.494880  294890 addons.go:239] Setting addon gcp-auth=true in "addons-461635"
	I1108 09:14:20.494929  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:20.495389  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:20.511927  294890 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 09:14:20.511978  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:20.530413  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:20.546294  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:20.809536  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:20.811549  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:21.046240  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:21.203871  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:21.311016  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:21.312028  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:21.547005  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:21.740815  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.702677796s)
	I1108 09:14:21.740885  294890 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.228939452s)
	I1108 09:14:21.743923  294890 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 09:14:21.746692  294890 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 09:14:21.749553  294890 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 09:14:21.749586  294890 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 09:14:21.763701  294890 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 09:14:21.763731  294890 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 09:14:21.776307  294890 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:14:21.776374  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 09:14:21.790143  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:14:21.809839  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:21.812245  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:22.045988  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:22.296690  294890 addons.go:480] Verifying addon gcp-auth=true in "addons-461635"
	I1108 09:14:22.300127  294890 out.go:179] * Verifying gcp-auth addon...
	I1108 09:14:22.303673  294890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 09:14:22.306595  294890 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 09:14:22.306614  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:22.308967  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:22.311511  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:22.546917  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:22.806609  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:22.808874  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:22.811238  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:23.045560  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:23.307667  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:23.310749  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:23.311791  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:23.544677  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:23.703507  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:23.807980  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:23.809419  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:23.811700  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:24.044718  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:24.306618  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:24.308822  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:24.311203  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:24.544995  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:24.807431  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:24.809664  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:24.811504  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:25.044822  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:25.307049  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:25.308639  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:25.312115  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:25.545352  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:25.807017  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:25.809309  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:25.811367  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:26.045474  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:26.204248  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:26.307045  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:26.308966  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:26.311105  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:26.545443  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:26.807062  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:26.808897  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:26.810947  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:27.044695  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:27.308265  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:27.310029  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:27.311716  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:27.544442  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:27.807151  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:27.808540  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:27.811827  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:28.044581  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:28.307366  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:28.309174  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:28.311434  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:28.545812  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:28.703724  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:28.806352  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:28.808723  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:28.812296  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:29.045172  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:29.309635  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:29.309865  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:29.311718  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:29.544631  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:29.807239  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:29.809563  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:29.811448  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:30.045927  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:30.307040  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:30.308862  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:30.311049  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:30.545265  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:30.807191  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:30.809001  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:30.810877  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:31.044931  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:31.203797  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:31.307766  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:31.308686  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:31.311656  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:31.545585  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:31.807416  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:31.809445  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:31.811325  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:32.045862  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:32.307315  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:32.310261  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:32.311194  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:32.545319  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:32.807161  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:32.808801  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:32.811171  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:33.045244  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:33.307645  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:33.309525  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:33.311266  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:33.549929  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:33.703770  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:33.806645  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:33.808752  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:33.812186  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:34.045207  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:34.306920  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:34.308848  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:34.311291  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:34.545638  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:34.807199  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:34.809357  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:34.811137  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:35.045236  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:35.308250  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:35.309550  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:35.311524  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:35.546647  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:35.806505  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:35.808639  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:35.811865  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:36.044673  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:36.203680  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:36.306644  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:36.310592  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:36.312380  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:36.545821  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:36.806621  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:36.808845  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:36.811130  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:37.045148  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:37.307581  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:37.310491  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:37.311440  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:37.545346  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:37.806639  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:37.808618  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:37.811999  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:38.045182  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:38.306963  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:38.309164  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:38.311279  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:38.545263  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:38.703272  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:38.807391  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:38.809998  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:38.811881  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:39.044599  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:39.307915  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:39.309498  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:39.311380  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:39.545321  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:39.806671  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:39.808738  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:39.811922  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:40.047760  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:40.306909  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:40.317237  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:40.318487  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:40.545820  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:40.703579  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:40.807748  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:40.811590  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:40.812467  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:41.045473  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:41.307962  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:41.310000  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:41.311783  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:41.544904  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:41.807064  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:41.808825  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:41.811010  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:42.045255  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:42.307475  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:42.310091  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:42.312377  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:42.545665  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:42.703971  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:42.807600  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:42.810550  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:42.812821  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:43.044794  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:43.306753  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:43.309507  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:43.311650  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:43.544568  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:43.808335  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:43.810761  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:43.811867  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:44.044750  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:44.306886  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:44.309608  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:44.311348  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:44.545768  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:44.806853  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:44.816360  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:44.818937  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:45.046596  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:45.203758  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:45.311348  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:45.311804  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:45.314212  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:45.545142  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:45.808638  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:45.809323  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:45.811469  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:46.045951  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:46.307362  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:46.309579  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:46.311605  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:46.545000  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:46.807679  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:46.809193  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:46.810942  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:47.045561  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:47.306819  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:47.309165  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:47.311720  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:47.545827  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:47.703913  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:47.806839  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:47.808981  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:47.811156  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:48.045527  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:48.310041  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:48.310054  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:48.312191  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:48.545310  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:48.807287  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:48.809859  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:48.811695  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:49.044646  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:49.307282  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:49.309613  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:49.311969  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:49.544724  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:49.807037  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:49.809511  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:49.811494  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:50.045759  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:50.204492  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:50.307604  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:50.311777  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:50.313122  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:50.545444  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:50.807342  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:50.809448  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:50.811445  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:51.045332  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:51.307596  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:51.310023  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:51.312095  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:51.545100  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:51.808482  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:51.809429  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:51.812266  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:52.045383  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:52.307331  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:52.309447  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:52.311872  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:52.545092  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:52.703022  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:52.807213  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:52.809543  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:52.811559  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:53.045826  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:53.306764  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:53.308882  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:53.311156  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:53.545290  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:53.822122  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:53.823019  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:53.823599  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:54.066249  294890 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:14:54.066275  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:54.235166  294890 node_ready.go:49] node "addons-461635" is "Ready"
	I1108 09:14:54.235196  294890 node_ready.go:38] duration metric: took 39.534933836s for node "addons-461635" to be "Ready" ...
	I1108 09:14:54.235211  294890 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:14:54.235266  294890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:14:54.293549  294890 api_server.go:72] duration metric: took 41.896301414s to wait for apiserver process to appear ...
	I1108 09:14:54.293575  294890 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:14:54.293595  294890 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 09:14:54.322542  294890 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 09:14:54.323862  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:54.324851  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:54.324963  294890 api_server.go:141] control plane version: v1.34.1
	I1108 09:14:54.324984  294890 api_server.go:131] duration metric: took 31.402508ms to wait for apiserver health ...
	I1108 09:14:54.324994  294890 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:14:54.325619  294890 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:14:54.325641  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:54.330846  294890 system_pods.go:59] 19 kube-system pods found
	I1108 09:14:54.330882  294890 system_pods.go:61] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:54.330892  294890 system_pods.go:61] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:54.330900  294890 system_pods.go:61] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending
	I1108 09:14:54.330905  294890 system_pods.go:61] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending
	I1108 09:14:54.330910  294890 system_pods.go:61] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:54.330914  294890 system_pods.go:61] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:54.330924  294890 system_pods.go:61] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:54.330931  294890 system_pods.go:61] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:54.330945  294890 system_pods.go:61] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:54.330951  294890 system_pods.go:61] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:54.330962  294890 system_pods.go:61] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:54.330969  294890 system_pods.go:61] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:54.330982  294890 system_pods.go:61] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending
	I1108 09:14:54.330989  294890 system_pods.go:61] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:54.330999  294890 system_pods.go:61] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:54.331004  294890 system_pods.go:61] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending
	I1108 09:14:54.331011  294890 system_pods.go:61] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending
	I1108 09:14:54.331016  294890 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending
	I1108 09:14:54.331023  294890 system_pods.go:61] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending
	I1108 09:14:54.331028  294890 system_pods.go:74] duration metric: took 6.029053ms to wait for pod list to return data ...
	I1108 09:14:54.331044  294890 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:14:54.336258  294890 default_sa.go:45] found service account: "default"
	I1108 09:14:54.336287  294890 default_sa.go:55] duration metric: took 5.236626ms for default service account to be created ...
	I1108 09:14:54.336306  294890 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:14:54.354648  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:54.354685  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:54.354703  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:54.354709  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending
	I1108 09:14:54.354714  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending
	I1108 09:14:54.354719  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:54.354724  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:54.354731  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:54.354736  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:54.354750  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:54.354755  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:54.354760  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:54.354781  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:54.354786  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending
	I1108 09:14:54.354801  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:54.354808  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:54.354812  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending
	I1108 09:14:54.354816  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending
	I1108 09:14:54.354821  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending
	I1108 09:14:54.354825  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending
	I1108 09:14:54.354842  294890 retry.go:31] will retry after 233.17935ms: missing components: kube-dns
	I1108 09:14:54.555130  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:54.615046  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:54.615091  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:54.615102  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:54.615109  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:14:54.615120  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:14:54.615125  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:54.615131  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:54.615135  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:54.615147  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:54.615154  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:54.615171  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:54.615184  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:54.615196  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:54.615204  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending
	I1108 09:14:54.615211  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:54.615223  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:54.615229  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:14:54.615250  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:54.615263  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:54.615269  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:14:54.615290  294890 retry.go:31] will retry after 382.644084ms: missing components: kube-dns
	I1108 09:14:54.807035  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:54.907928  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:54.908163  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:55.003592  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:55.003654  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:55.003666  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:55.003677  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:14:55.003686  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:14:55.003690  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:55.003696  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:55.003700  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:55.003715  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:55.003723  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:55.003727  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:55.003732  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:55.003738  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:55.003745  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:14:55.003751  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:55.003760  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:55.003780  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:14:55.003788  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.003802  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.003808  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:14:55.003943  294890 retry.go:31] will retry after 376.455888ms: missing components: kube-dns
	I1108 09:14:55.045843  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:55.310431  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:55.310800  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:55.313456  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:55.385872  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:55.385917  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:55.385925  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:55.385933  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:14:55.385942  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:14:55.385947  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:55.385964  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:55.385976  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:55.385980  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:55.385987  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:55.385997  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:55.386003  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:55.386010  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:55.386020  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:14:55.386037  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:55.386049  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:55.386055  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:14:55.386062  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.386074  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.386080  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:14:55.386101  294890 retry.go:31] will retry after 424.221664ms: missing components: kube-dns
	I1108 09:14:55.549034  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:55.810007  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:55.810170  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:55.812257  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:55.815068  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:55.815098  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Running
	I1108 09:14:55.815108  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:55.815114  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:14:55.815124  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:14:55.815130  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:55.815135  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:55.815140  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:55.815144  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:55.815151  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:55.815160  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:55.815167  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:55.815174  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:55.815186  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:14:55.815192  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:55.815202  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:55.815209  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:14:55.815217  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.815225  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.815234  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Running
	I1108 09:14:55.815242  294890 system_pods.go:126] duration metric: took 1.478929602s to wait for k8s-apps to be running ...
	I1108 09:14:55.815254  294890 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:14:55.815307  294890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:14:55.829840  294890 system_svc.go:56] duration metric: took 14.576352ms WaitForService to wait for kubelet
	I1108 09:14:55.829919  294890 kubeadm.go:587] duration metric: took 43.432675695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:14:55.829954  294890 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:14:55.833394  294890 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 09:14:55.833471  294890 node_conditions.go:123] node cpu capacity is 2
	I1108 09:14:55.833499  294890 node_conditions.go:105] duration metric: took 3.523336ms to run NodePressure ...
	I1108 09:14:55.833524  294890 start.go:242] waiting for startup goroutines ...
	I1108 09:14:56.047718  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:56.306801  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:56.309255  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:56.311768  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:56.553079  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:56.808563  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:56.809527  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:56.811638  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:57.046692  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:57.319414  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:57.321364  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:57.322120  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:57.546430  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:57.811326  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:57.811776  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:57.814860  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:58.046574  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:58.318714  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:58.318913  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:58.319019  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:58.555584  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:58.807374  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:58.811154  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:58.813486  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:59.051238  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:59.308480  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:59.309888  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:59.311591  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:59.545684  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:59.808328  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:59.811706  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:59.814552  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:00.068668  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:00.322248  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:00.354824  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:00.355330  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:00.546842  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:00.809950  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:00.813769  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:00.813822  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:01.047403  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:01.307278  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:01.309883  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:01.312244  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:01.545831  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:01.809820  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:01.809935  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:01.812248  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:02.045912  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:02.306523  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:02.309199  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:02.311787  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:02.546737  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:02.806698  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:02.809311  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:02.811618  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:03.047348  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:03.310890  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:03.312269  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:03.313047  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:03.545268  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:03.811289  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:03.811801  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:03.812320  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:04.045963  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:04.308457  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:04.310349  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:04.312248  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:04.545764  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:04.810306  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:04.810765  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:04.814044  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:05.047262  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:05.309181  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:05.313226  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:05.314123  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:05.546145  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:05.807519  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:05.809950  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:05.813031  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:06.045513  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:06.310038  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:06.310562  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:06.312836  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:06.546174  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:06.809242  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:06.812690  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:06.813130  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:07.046266  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:07.307950  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:07.310068  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:07.311813  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:07.544699  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:07.807309  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:07.809431  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:07.811302  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:08.046059  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:08.307538  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:08.310221  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:08.312338  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:08.545712  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:08.809024  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:08.809825  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:08.811941  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:09.052489  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:09.313564  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:09.315460  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:09.315597  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:09.546706  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:09.828693  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:09.828830  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:09.828882  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:10.050797  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:10.307636  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:10.309930  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:10.312472  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:10.546762  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:10.807944  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:10.810058  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:10.815904  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:11.048172  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:11.307290  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:11.310944  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:11.312789  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:11.545049  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:11.830893  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:11.831068  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:11.831133  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:12.045456  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:12.308232  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:12.312335  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:12.313643  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:12.545391  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:12.807485  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:12.810116  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:12.812206  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:13.046818  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:13.309496  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:13.309632  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:13.311829  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:13.545100  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:13.808471  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:13.810963  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:13.812351  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:14.046791  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:14.308105  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:14.317425  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:14.319049  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:14.545409  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:14.809155  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:14.811878  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:14.812129  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:15.047230  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:15.307487  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:15.312261  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:15.314075  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:15.545186  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:15.808107  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:15.861012  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:15.861102  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:16.045830  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:16.307116  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:16.309796  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:16.311899  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:16.545947  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:16.808175  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:16.809208  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:16.810955  294890 kapi.go:107] duration metric: took 58.002150717s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 09:15:17.045644  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:17.306980  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:17.309007  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:17.545722  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:17.808651  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:17.809707  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:18.046016  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:18.308482  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:18.316810  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:18.546116  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:18.810023  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:18.810883  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:19.046121  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:19.308114  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:19.311457  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:19.547215  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:19.822045  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:19.826648  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:20.046976  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:20.307431  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:20.309844  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:20.545494  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:20.807022  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:20.809629  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:21.045622  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:21.308297  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:21.309788  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:21.545713  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:21.807951  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:21.809218  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:22.045490  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:22.306747  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:22.309064  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:22.545447  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:22.816052  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:22.816235  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:23.045733  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:23.306625  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:23.308901  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:23.545899  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:23.807038  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:23.813414  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:24.045790  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:24.307440  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:24.309891  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:24.545186  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:24.807334  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:24.809687  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:25.047389  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:25.308368  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:25.310644  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:25.545581  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:25.808664  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:25.809330  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:26.045975  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:26.307887  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:26.309270  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:26.546287  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:26.809975  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:26.810154  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:27.045142  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:27.309812  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:27.309998  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:27.545114  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:27.807373  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:27.809795  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:28.045300  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:28.307748  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:28.310142  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:28.545421  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:28.808971  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:28.810279  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:29.046479  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:29.309333  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:29.311321  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:29.546197  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:29.807615  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:29.809889  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:30.065725  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:30.307350  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:30.311057  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:30.547858  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:30.808154  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:30.809571  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:31.044965  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:31.307658  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:31.309950  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:31.545886  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:31.807187  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:31.809669  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:32.045205  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:32.308130  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:32.311199  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:32.546347  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:32.807190  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:32.810314  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:33.046320  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:33.309490  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:33.311245  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:33.545682  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:33.806916  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:33.809457  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:34.046788  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:34.307543  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:34.310013  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:34.546481  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:34.807461  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:34.810790  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:35.048360  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:35.307776  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:35.310861  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:35.546390  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:35.806990  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:35.808984  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:36.045600  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:36.306685  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:36.309198  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:36.545825  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:36.807086  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:36.809305  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:37.045956  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:37.309897  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:37.310864  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:37.545499  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:37.807936  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:37.809412  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:38.046503  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:38.306632  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:38.308876  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:38.545574  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:38.808431  294890 kapi.go:107] duration metric: took 1m16.50475823s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 09:15:38.810960  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:38.811582  294890 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-461635 cluster.
	I1108 09:15:38.814617  294890 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 09:15:38.817467  294890 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1108 09:15:39.045514  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:39.310925  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:39.546600  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:39.810354  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:40.046099  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:40.310188  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:40.545924  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:40.809699  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:41.050473  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:41.310228  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:41.545451  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:41.809714  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:42.045398  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:42.310083  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:42.545305  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:42.811620  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:43.044800  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:43.310341  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:43.545860  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:43.809301  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:44.047227  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:44.309378  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:44.545686  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:44.810406  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:45.057343  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:45.312121  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:45.546384  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:45.811767  294890 kapi.go:107] duration metric: took 1m27.005701154s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 09:15:46.045877  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:46.545866  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:47.046667  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:47.546065  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:48.046316  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:48.545284  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:49.061173  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:49.546066  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:50.047284  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:50.545747  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:51.046271  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:51.547213  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:52.046066  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:52.545872  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:53.048680  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:53.546100  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:54.046341  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:54.545563  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:55.046348  294890 kapi.go:107] duration metric: took 1m36.004550257s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 09:15:55.049532  294890 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, registry-creds, nvidia-device-plugin, default-storageclass, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1108 09:15:55.052534  294890 addons.go:515] duration metric: took 1m42.654879835s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner registry-creds nvidia-device-plugin default-storageclass storage-provisioner ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1108 09:15:55.052607  294890 start.go:247] waiting for cluster config update ...
	I1108 09:15:55.052633  294890 start.go:256] writing updated cluster config ...
	I1108 09:15:55.053002  294890 ssh_runner.go:195] Run: rm -f paused
	I1108 09:15:55.058717  294890 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:15:55.062489  294890 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bj8nx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.067650  294890 pod_ready.go:94] pod "coredns-66bc5c9577-bj8nx" is "Ready"
	I1108 09:15:55.067686  294890 pod_ready.go:86] duration metric: took 5.16689ms for pod "coredns-66bc5c9577-bj8nx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.070121  294890 pod_ready.go:83] waiting for pod "etcd-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.075324  294890 pod_ready.go:94] pod "etcd-addons-461635" is "Ready"
	I1108 09:15:55.075393  294890 pod_ready.go:86] duration metric: took 5.187691ms for pod "etcd-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.077818  294890 pod_ready.go:83] waiting for pod "kube-apiserver-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.083268  294890 pod_ready.go:94] pod "kube-apiserver-addons-461635" is "Ready"
	I1108 09:15:55.083349  294890 pod_ready.go:86] duration metric: took 5.497077ms for pod "kube-apiserver-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.086861  294890 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.463630  294890 pod_ready.go:94] pod "kube-controller-manager-addons-461635" is "Ready"
	I1108 09:15:55.463677  294890 pod_ready.go:86] duration metric: took 376.763895ms for pod "kube-controller-manager-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.663658  294890 pod_ready.go:83] waiting for pod "kube-proxy-2b5dx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:56.062817  294890 pod_ready.go:94] pod "kube-proxy-2b5dx" is "Ready"
	I1108 09:15:56.062849  294890 pod_ready.go:86] duration metric: took 399.161509ms for pod "kube-proxy-2b5dx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:56.263098  294890 pod_ready.go:83] waiting for pod "kube-scheduler-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:56.663473  294890 pod_ready.go:94] pod "kube-scheduler-addons-461635" is "Ready"
	I1108 09:15:56.663566  294890 pod_ready.go:86] duration metric: took 400.439194ms for pod "kube-scheduler-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:56.663597  294890 pod_ready.go:40] duration metric: took 1.604846793s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:15:56.740653  294890 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 09:15:56.745706  294890 out.go:179] * Done! kubectl is now configured to use "addons-461635" cluster and "default" namespace by default
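	
	Note on the gcp-auth hint printed in the log above ("add a label with the `gcp-auth-skip-secret` key to your pod configuration"): a minimal sketch of a pod that opts out of credential mounting could look like the manifest below. Only the label key comes from the log output; the pod name, container image, and the label value "true" are illustrative assumptions, not taken from this run.
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                  # hypothetical name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"      # label key from the addon hint above; value assumed
	  spec:
	    containers:
	    - name: app
	      image: nginx                      # placeholder image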
	
	
	==> CRI-O <==
	Nov 08 09:18:07 addons-461635 crio[833]: time="2025-11-08T09:18:07.392072185Z" level=info msg="Removed pod sandbox: a656e281bd8b5a44adf442d90fc95f7db071db9504e33c15f6af875b2df36fca" id=0235691f-9637-48fb-99e9-98a2cb1852a4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.319441656Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-2ztjz/POD" id=1f24a1f1-f1a3-4828-b0f7-7d7bd26dad67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.319511532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.328394404Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2ztjz Namespace:default ID:7cd7165d2142c66d20156afb4f6be1e23c0972db1644f3d0458c5d8130ee3990 UID:1ad35a74-2849-44f4-a8da-2c76da3fa034 NetNS:/var/run/netns/08c44206-6a67-4385-905c-57fc78c7e645 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40019de320}] Aliases:map[]}"
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.328438646Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-2ztjz to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.363275021Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2ztjz Namespace:default ID:7cd7165d2142c66d20156afb4f6be1e23c0972db1644f3d0458c5d8130ee3990 UID:1ad35a74-2849-44f4-a8da-2c76da3fa034 NetNS:/var/run/netns/08c44206-6a67-4385-905c-57fc78c7e645 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40019de320}] Aliases:map[]}"
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.363433554Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-2ztjz for CNI network kindnet (type=ptp)"
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.371237291Z" level=info msg="Ran pod sandbox 7cd7165d2142c66d20156afb4f6be1e23c0972db1644f3d0458c5d8130ee3990 with infra container: default/hello-world-app-5d498dc89-2ztjz/POD" id=1f24a1f1-f1a3-4828-b0f7-7d7bd26dad67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.372650813Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d0ed664a-7b1d-4787-b5e9-97f8b433c21a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.372787996Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d0ed664a-7b1d-4787-b5e9-97f8b433c21a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.372826314Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=d0ed664a-7b1d-4787-b5e9-97f8b433c21a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.377622404Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=15851c94-f7d1-410d-85a9-35fcd8bbcd7b name=/runtime.v1.ImageService/PullImage
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.384941107Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.981600783Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=15851c94-f7d1-410d-85a9-35fcd8bbcd7b name=/runtime.v1.ImageService/PullImage
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.982836054Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=548d9a01-15e9-4ac1-927e-e75b3cd3eaaa name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.987644772Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=26ac4c44-0c9b-40e4-a49a-a17276ae64e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.995271286Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-2ztjz/hello-world-app" id=408b63f2-4608-49a7-9090-b3878ac3b9e3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:55 addons-461635 crio[833]: time="2025-11-08T09:18:55.995604803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:56 addons-461635 crio[833]: time="2025-11-08T09:18:56.010041755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:56 addons-461635 crio[833]: time="2025-11-08T09:18:56.010427606Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/01f62a1b9e91cabe35b8c97deb3aa5f37a07fe5e841c4e3decddefbb1f021959/merged/etc/passwd: no such file or directory"
	Nov 08 09:18:56 addons-461635 crio[833]: time="2025-11-08T09:18:56.010538425Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/01f62a1b9e91cabe35b8c97deb3aa5f37a07fe5e841c4e3decddefbb1f021959/merged/etc/group: no such file or directory"
	Nov 08 09:18:56 addons-461635 crio[833]: time="2025-11-08T09:18:56.010890913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:56 addons-461635 crio[833]: time="2025-11-08T09:18:56.039014852Z" level=info msg="Created container 8ccd8b21919498037b21a87b91888f18f6cb930bb0ba5eb1317415cbffdd2801: default/hello-world-app-5d498dc89-2ztjz/hello-world-app" id=408b63f2-4608-49a7-9090-b3878ac3b9e3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:56 addons-461635 crio[833]: time="2025-11-08T09:18:56.041454477Z" level=info msg="Starting container: 8ccd8b21919498037b21a87b91888f18f6cb930bb0ba5eb1317415cbffdd2801" id=f3b1bb2b-c1cd-4c4d-84e0-49d6a308e062 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:56 addons-461635 crio[833]: time="2025-11-08T09:18:56.049130952Z" level=info msg="Started container" PID=7038 containerID=8ccd8b21919498037b21a87b91888f18f6cb930bb0ba5eb1317415cbffdd2801 description=default/hello-world-app-5d498dc89-2ztjz/hello-world-app id=f3b1bb2b-c1cd-4c4d-84e0-49d6a308e062 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cd7165d2142c66d20156afb4f6be1e23c0972db1644f3d0458c5d8130ee3990
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	8ccd8b2191949       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   7cd7165d2142c       hello-world-app-5d498dc89-2ztjz             default
	1499eebe42946       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   1f4595c1061a9       nginx                                       default
	24cb90f38a746       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   61c2c0f589b68       busybox                                     default
	cec2fa3f2818c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	e75bb914088f3       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	b7edd2dbe2ee2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	0f65082d20771       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	250d2962750d9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   53561e106973e       gadget-tg2w5                                gadget
	5b138abbcda7d       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   d50058827b30f       ingress-nginx-controller-675c5ddd98-sk8px   ingress-nginx
	cef07699d0d39       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   55597fff21533       gcp-auth-78565c9fb4-gvq8l                   gcp-auth
	4a60c053ded9b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	7adfc46b5e895       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   01aa05a2e1595       kube-ingress-dns-minikube                   kube-system
	4eaa15ebd9dc0       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   63e550ba57535       nvidia-device-plugin-daemonset-fdnsr        kube-system
	7ae2515ace62b       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   e96f15f7472ba       cloud-spanner-emulator-6f9fcf858b-67xhk     default
	f6ad305097e58       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   3a5ef86e0d5bf       registry-proxy-7g9lx                        kube-system
	e243ab620e43b       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   8cf71cae4c02d       registry-6b586f9694-6xz6d                   kube-system
	9564a18f3ee6e       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   51bd23c3e84e3       snapshot-controller-7d9fbc56b8-g8nmj        kube-system
	20fd1a8f9fead       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   4a5c4f71459ba       ingress-nginx-admission-patch-f9wtz         ingress-nginx
	d4439fc0f8e18       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   df8317593172b       ingress-nginx-admission-create-ld89t        ingress-nginx
	6e683377d4d46       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   55362cf653862       metrics-server-85b7d694d7-7rj8w             kube-system
	3bc84587627cb       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   87f47320987c0       yakd-dashboard-5ff678cb9-jdt2n              yakd-dashboard
	f06d60de07926       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   9543a9339b6c5       snapshot-controller-7d9fbc56b8-67l2n        kube-system
	bc405f71c0f23       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	ec999739a0e69       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   5a034617afed0       csi-hostpath-resizer-0                      kube-system
	94ac10889e7e5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   8f80e702f96f2       local-path-provisioner-648f6765c9-t7jnl     local-path-storage
	735a5e20ff11f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   e2b16663885e9       csi-hostpath-attacher-0                     kube-system
	bc2c816611acc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   63ccc2bbbfc81       storage-provisioner                         kube-system
	915d95faab44d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   e8177a5212fa9       coredns-66bc5c9577-bj8nx                    kube-system
	d901d07c82588       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   9f0919d28558b       kube-proxy-2b5dx                            kube-system
	537bc857d2209       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   02dee02e78c10       kindnet-rtsff                               kube-system
	2eaec8104c429       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             4 minutes ago            Running             kube-scheduler                           0                   0bcc4010ea35c       kube-scheduler-addons-461635                kube-system
	deacd1133d379       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             4 minutes ago            Running             kube-controller-manager                  0                   d08b6d802bd27       kube-controller-manager-addons-461635       kube-system
	69e30ea1fe4ff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             4 minutes ago            Running             kube-apiserver                           0                   06b0510d551c3       kube-apiserver-addons-461635                kube-system
	79cffc5046936       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             4 minutes ago            Running             etcd                                     0                   7640bb325d860       etcd-addons-461635                          kube-system
	
	
	==> coredns [915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca] <==
	[INFO] 10.244.0.15:51967 - 28265 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002516425s
	[INFO] 10.244.0.15:51967 - 70 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000214214s
	[INFO] 10.244.0.15:51967 - 38621 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000155383s
	[INFO] 10.244.0.15:57444 - 7382 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149942s
	[INFO] 10.244.0.15:57444 - 7135 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126762s
	[INFO] 10.244.0.15:40551 - 23851 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084046s
	[INFO] 10.244.0.15:40551 - 23670 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000187464s
	[INFO] 10.244.0.15:39933 - 8031 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095944s
	[INFO] 10.244.0.15:39933 - 7825 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164211s
	[INFO] 10.244.0.15:50290 - 34346 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00142989s
	[INFO] 10.244.0.15:50290 - 34141 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001467273s
	[INFO] 10.244.0.15:57164 - 47597 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000182255s
	[INFO] 10.244.0.15:57164 - 47128 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00042727s
	[INFO] 10.244.0.19:59482 - 40264 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156466s
	[INFO] 10.244.0.19:42019 - 24731 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191575s
	[INFO] 10.244.0.19:48528 - 48788 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102302s
	[INFO] 10.244.0.19:33460 - 11470 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000200478s
	[INFO] 10.244.0.19:59069 - 20403 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000246345s
	[INFO] 10.244.0.19:48940 - 19625 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000166131s
	[INFO] 10.244.0.19:36498 - 61627 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002525697s
	[INFO] 10.244.0.19:47129 - 63874 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001921431s
	[INFO] 10.244.0.19:33603 - 17104 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001193884s
	[INFO] 10.244.0.19:52924 - 14532 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001940189s
	[INFO] 10.244.0.23:35722 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000193742s
	[INFO] 10.244.0.23:51185 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140432s
	
	
	==> describe nodes <==
	Name:               addons-461635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-461635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=addons-461635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_14_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-461635
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-461635"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:14:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-461635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:18:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:17:52 +0000   Sat, 08 Nov 2025 09:14:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:17:52 +0000   Sat, 08 Nov 2025 09:14:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:17:52 +0000   Sat, 08 Nov 2025 09:14:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:17:52 +0000   Sat, 08 Nov 2025 09:14:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-461635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                e197f1ed-5acc-41d9-9508-112a7409480b
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-6f9fcf858b-67xhk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  default                     hello-world-app-5d498dc89-2ztjz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-tg2w5                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  gcp-auth                    gcp-auth-78565c9fb4-gvq8l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-sk8px    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m39s
	  kube-system                 coredns-66bc5c9577-bj8nx                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m45s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpathplugin-z6vwk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-addons-461635                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m50s
	  kube-system                 kindnet-rtsff                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m46s
	  kube-system                 kube-apiserver-addons-461635                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-addons-461635        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-2b5dx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-scheduler-addons-461635                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 metrics-server-85b7d694d7-7rj8w              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m40s
	  kube-system                 nvidia-device-plugin-daemonset-fdnsr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 registry-6b586f9694-6xz6d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 registry-creds-764b6fb674-ch6rs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 registry-proxy-7g9lx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 snapshot-controller-7d9fbc56b8-67l2n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 snapshot-controller-7d9fbc56b8-g8nmj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  local-path-storage          local-path-provisioner-648f6765c9-t7jnl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-jdt2n               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node addons-461635 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node addons-461635 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m58s (x8 over 4m58s)  kubelet          Node addons-461635 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m50s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m50s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m50s                  kubelet          Node addons-461635 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m50s                  kubelet          Node addons-461635 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m50s                  kubelet          Node addons-461635 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m46s                  node-controller  Node addons-461635 event: Registered Node addons-461635 in Controller
	  Normal   NodeReady                4m4s                   kubelet          Node addons-461635 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014865] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.528312] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034771] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.823038] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.933277] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 8 08:21] hrtimer: interrupt took 14263725 ns
	[Nov 8 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 8 09:14] overlayfs: idmapped layers are currently not supported
	[  +0.129013] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a] <==
	{"level":"warn","ts":"2025-11-08T09:14:02.728255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.763271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.785133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.822145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.862145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.885483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.903380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.946361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.981213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.041252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.077106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.120875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.165429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.180685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.206338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.275320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.293426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.316370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.443558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:19.368386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:19.383932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:41.413137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:41.427361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:41.472479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:41.481833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50312","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [cef07699d0d3923c65c9264b7f7b78caef0279434b6e6c391a1ba8971d303b93] <==
	2025/11/08 09:15:38 GCP Auth Webhook started!
	2025/11/08 09:15:57 Ready to marshal response ...
	2025/11/08 09:15:57 Ready to write response ...
	2025/11/08 09:15:57 Ready to marshal response ...
	2025/11/08 09:15:57 Ready to write response ...
	2025/11/08 09:15:58 Ready to marshal response ...
	2025/11/08 09:15:58 Ready to write response ...
	2025/11/08 09:16:19 Ready to marshal response ...
	2025/11/08 09:16:19 Ready to write response ...
	2025/11/08 09:16:22 Ready to marshal response ...
	2025/11/08 09:16:22 Ready to write response ...
	2025/11/08 09:16:22 Ready to marshal response ...
	2025/11/08 09:16:22 Ready to write response ...
	2025/11/08 09:16:30 Ready to marshal response ...
	2025/11/08 09:16:30 Ready to write response ...
	2025/11/08 09:16:34 Ready to marshal response ...
	2025/11/08 09:16:34 Ready to write response ...
	2025/11/08 09:16:48 Ready to marshal response ...
	2025/11/08 09:16:48 Ready to write response ...
	2025/11/08 09:17:02 Ready to marshal response ...
	2025/11/08 09:17:02 Ready to write response ...
	2025/11/08 09:18:54 Ready to marshal response ...
	2025/11/08 09:18:54 Ready to write response ...
	
	
	==> kernel <==
	 09:18:57 up  2:01,  0 user,  load average: 0.52, 1.95, 2.77
	Linux addons-461635 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806] <==
	I1108 09:16:53.245380       1 main.go:301] handling current node
	I1108 09:17:03.244795       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:17:03.244832       1 main.go:301] handling current node
	I1108 09:17:13.244784       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:17:13.244832       1 main.go:301] handling current node
	I1108 09:17:23.245041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:17:23.245076       1 main.go:301] handling current node
	I1108 09:17:33.253900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:17:33.253936       1 main.go:301] handling current node
	I1108 09:17:43.244988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:17:43.245022       1 main.go:301] handling current node
	I1108 09:17:53.244948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:17:53.245085       1 main.go:301] handling current node
	I1108 09:18:03.244767       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:18:03.244805       1 main.go:301] handling current node
	I1108 09:18:13.249427       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:18:13.249544       1 main.go:301] handling current node
	I1108 09:18:23.245120       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:18:23.245156       1 main.go:301] handling current node
	I1108 09:18:33.253018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:18:33.253054       1 main.go:301] handling current node
	I1108 09:18:43.253058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:18:43.253167       1 main.go:301] handling current node
	I1108 09:18:53.249006       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:18:53.249040       1 main.go:301] handling current node
	
	
	==> kube-apiserver [69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc] <==
	W1108 09:14:41.413293       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1108 09:14:41.427160       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:14:41.466338       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:14:41.481851       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:14:53.864764       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.38.9:443: connect: connection refused
	E1108 09:14:53.865079       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.38.9:443: connect: connection refused" logger="UnhandledError"
	W1108 09:14:53.868507       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.38.9:443: connect: connection refused
	E1108 09:14:53.868546       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.38.9:443: connect: connection refused" logger="UnhandledError"
	W1108 09:14:53.968330       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.38.9:443: connect: connection refused
	E1108 09:14:53.968372       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.38.9:443: connect: connection refused" logger="UnhandledError"
	E1108 09:15:11.680357       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.246.19:443: connect: connection refused" logger="UnhandledError"
	W1108 09:15:11.680546       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:15:11.681446       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1108 09:15:11.681365       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.246.19:443: connect: connection refused" logger="UnhandledError"
	E1108 09:15:11.686348       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.246.19:443: connect: connection refused" logger="UnhandledError"
	I1108 09:15:11.859523       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 09:16:06.197482       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42126: use of closed network connection
	I1108 09:16:34.520594       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1108 09:16:34.849985       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.40.188"}
	I1108 09:16:58.527309       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1108 09:17:00.294239       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1108 09:18:55.205892       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.55.46"}
	
	
	==> kube-controller-manager [deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e] <==
	I1108 09:14:11.435607       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:14:11.435673       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:14:11.435868       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:14:11.435995       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:14:11.436182       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:14:11.436415       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:14:11.436449       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:14:11.436189       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:14:11.437612       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:14:11.437678       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:14:11.441937       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:14:11.443100       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:14:11.445356       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:14:11.446604       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1108 09:14:17.685298       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1108 09:14:41.405794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:14:41.405950       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1108 09:14:41.405995       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 09:14:41.454414       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1108 09:14:41.458446       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 09:14:41.506426       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:14:41.558750       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:14:56.427476       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1108 09:15:11.511602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:15:11.568219       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854] <==
	I1108 09:14:13.412342       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:14:13.526484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:14:13.628991       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:14:13.629026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:14:13.629147       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:14:13.689209       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:14:13.689263       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:14:13.699706       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:14:13.699995       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:14:13.700019       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:14:13.705867       1 config.go:200] "Starting service config controller"
	I1108 09:14:13.705893       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:14:13.705923       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:14:13.705928       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:14:13.705941       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:14:13.705945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:14:13.706587       1 config.go:309] "Starting node config controller"
	I1108 09:14:13.706601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:14:13.706607       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:14:13.806398       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:14:13.806441       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:14:13.806469       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e] <==
	I1108 09:14:04.855168       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:14:06.157277       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:14:06.157382       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:14:06.157416       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:14:06.157470       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:14:06.185860       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:14:06.186266       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:14:06.188514       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:14:06.188547       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:14:06.189467       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:14:06.189577       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 09:14:06.192097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 09:14:07.788638       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:17:07 addons-461635 kubelet[1278]: E1108 09:17:07.380652    1278 manager.go:1116] Failed to create existing container: /crio-39511ed67b7c55da65b0d445374aac1b41d08b409d1e42753dea166a59e184a5: Error finding container 39511ed67b7c55da65b0d445374aac1b41d08b409d1e42753dea166a59e184a5: Status 404 returned error can't find the container with id 39511ed67b7c55da65b0d445374aac1b41d08b409d1e42753dea166a59e184a5
	Nov 08 09:17:07 addons-461635 kubelet[1278]: E1108 09:17:07.382548    1278 manager.go:1116] Failed to create existing container: /docker/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/crio-2d23d64b304af482abe679e49aa47bf943756540fba31cc6637e6a62fb2018cd: Error finding container 2d23d64b304af482abe679e49aa47bf943756540fba31cc6637e6a62fb2018cd: Status 404 returned error can't find the container with id 2d23d64b304af482abe679e49aa47bf943756540fba31cc6637e6a62fb2018cd
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.440022    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^af4f0e68-bc83-11f0-8355-ba90c1ff9fca\") pod \"4b50e1ac-6970-4b76-a0be-e86527115403\" (UID: \"4b50e1ac-6970-4b76-a0be-e86527115403\") "
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.440072    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4b50e1ac-6970-4b76-a0be-e86527115403-gcp-creds\") pod \"4b50e1ac-6970-4b76-a0be-e86527115403\" (UID: \"4b50e1ac-6970-4b76-a0be-e86527115403\") "
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.440096    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcr5t\" (UniqueName: \"kubernetes.io/projected/4b50e1ac-6970-4b76-a0be-e86527115403-kube-api-access-zcr5t\") pod \"4b50e1ac-6970-4b76-a0be-e86527115403\" (UID: \"4b50e1ac-6970-4b76-a0be-e86527115403\") "
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.440954    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b50e1ac-6970-4b76-a0be-e86527115403-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4b50e1ac-6970-4b76-a0be-e86527115403" (UID: "4b50e1ac-6970-4b76-a0be-e86527115403"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.442619    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b50e1ac-6970-4b76-a0be-e86527115403-kube-api-access-zcr5t" (OuterVolumeSpecName: "kube-api-access-zcr5t") pod "4b50e1ac-6970-4b76-a0be-e86527115403" (UID: "4b50e1ac-6970-4b76-a0be-e86527115403"). InnerVolumeSpecName "kube-api-access-zcr5t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.444706    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^af4f0e68-bc83-11f0-8355-ba90c1ff9fca" (OuterVolumeSpecName: "task-pv-storage") pod "4b50e1ac-6970-4b76-a0be-e86527115403" (UID: "4b50e1ac-6970-4b76-a0be-e86527115403"). InnerVolumeSpecName "pvc-08bf13b0-2086-49fa-b737-181e4f50dd8b". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.541565    1278 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-08bf13b0-2086-49fa-b737-181e4f50dd8b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^af4f0e68-bc83-11f0-8355-ba90c1ff9fca\") on node \"addons-461635\" "
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.541606    1278 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4b50e1ac-6970-4b76-a0be-e86527115403-gcp-creds\") on node \"addons-461635\" DevicePath \"\""
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.541621    1278 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zcr5t\" (UniqueName: \"kubernetes.io/projected/4b50e1ac-6970-4b76-a0be-e86527115403-kube-api-access-zcr5t\") on node \"addons-461635\" DevicePath \"\""
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.546921    1278 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-08bf13b0-2086-49fa-b737-181e4f50dd8b" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^af4f0e68-bc83-11f0-8355-ba90c1ff9fca") on node "addons-461635"
	Nov 08 09:17:10 addons-461635 kubelet[1278]: I1108 09:17:10.642503    1278 reconciler_common.go:299] "Volume detached for volume \"pvc-08bf13b0-2086-49fa-b737-181e4f50dd8b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^af4f0e68-bc83-11f0-8355-ba90c1ff9fca\") on node \"addons-461635\" DevicePath \"\""
	Nov 08 09:17:11 addons-461635 kubelet[1278]: I1108 09:17:11.232579    1278 scope.go:117] "RemoveContainer" containerID="4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564"
	Nov 08 09:17:11 addons-461635 kubelet[1278]: I1108 09:17:11.246921    1278 scope.go:117] "RemoveContainer" containerID="4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564"
	Nov 08 09:17:11 addons-461635 kubelet[1278]: E1108 09:17:11.249985    1278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564\": container with ID starting with 4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564 not found: ID does not exist" containerID="4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564"
	Nov 08 09:17:11 addons-461635 kubelet[1278]: I1108 09:17:11.250110    1278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564"} err="failed to get container status \"4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564\": rpc error: code = NotFound desc = could not find container \"4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564\": container with ID starting with 4023670ca6f8cc268d79151bc78ddb265c8ef65e3d3a5d02045592bc4a954564 not found: ID does not exist"
	Nov 08 09:17:13 addons-461635 kubelet[1278]: I1108 09:17:13.209555    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b50e1ac-6970-4b76-a0be-e86527115403" path="/var/lib/kubelet/pods/4b50e1ac-6970-4b76-a0be-e86527115403/volumes"
	Nov 08 09:17:32 addons-461635 kubelet[1278]: I1108 09:17:32.207058    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7g9lx" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:17:41 addons-461635 kubelet[1278]: I1108 09:17:41.206718    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-6xz6d" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:18:08 addons-461635 kubelet[1278]: I1108 09:18:08.206397    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fdnsr" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:18:55 addons-461635 kubelet[1278]: I1108 09:18:55.053491    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xbf\" (UniqueName: \"kubernetes.io/projected/1ad35a74-2849-44f4-a8da-2c76da3fa034-kube-api-access-49xbf\") pod \"hello-world-app-5d498dc89-2ztjz\" (UID: \"1ad35a74-2849-44f4-a8da-2c76da3fa034\") " pod="default/hello-world-app-5d498dc89-2ztjz"
	Nov 08 09:18:55 addons-461635 kubelet[1278]: I1108 09:18:55.053587    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1ad35a74-2849-44f4-a8da-2c76da3fa034-gcp-creds\") pod \"hello-world-app-5d498dc89-2ztjz\" (UID: \"1ad35a74-2849-44f4-a8da-2c76da3fa034\") " pod="default/hello-world-app-5d498dc89-2ztjz"
	Nov 08 09:18:55 addons-461635 kubelet[1278]: W1108 09:18:55.370726    1278 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/crio-7cd7165d2142c66d20156afb4f6be1e23c0972db1644f3d0458c5d8130ee3990 WatchSource:0}: Error finding container 7cd7165d2142c66d20156afb4f6be1e23c0972db1644f3d0458c5d8130ee3990: Status 404 returned error can't find the container with id 7cd7165d2142c66d20156afb4f6be1e23c0972db1644f3d0458c5d8130ee3990
	Nov 08 09:18:56 addons-461635 kubelet[1278]: I1108 09:18:56.207180    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7g9lx" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2] <==
	W1108 09:18:31.995867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:34.001261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:34.006559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:36.012022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:36.017119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:38.020351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:38.027468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:40.036679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:40.042676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:42.046356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:42.051452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:44.054388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:44.061521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:46.065265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:46.070398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:48.073444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:48.078846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:50.082272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:50.087420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:52.090172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:52.094730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:54.098164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:54.105681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:56.109066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:56.114185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-461635 -n addons-461635
helpers_test.go:269: (dbg) Run:  kubectl --context addons-461635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-ld89t ingress-nginx-admission-patch-f9wtz registry-creds-764b6fb674-ch6rs
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-461635 describe pod ingress-nginx-admission-create-ld89t ingress-nginx-admission-patch-f9wtz registry-creds-764b6fb674-ch6rs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-461635 describe pod ingress-nginx-admission-create-ld89t ingress-nginx-admission-patch-f9wtz registry-creds-764b6fb674-ch6rs: exit status 1 (87.401496ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ld89t" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f9wtz" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-ch6rs" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-461635 describe pod ingress-nginx-admission-create-ld89t ingress-nginx-admission-patch-f9wtz registry-creds-764b6fb674-ch6rs: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (302.271303ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:18:58.218363  304292 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:18:58.219117  304292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:58.219132  304292 out.go:374] Setting ErrFile to fd 2...
	I1108 09:18:58.219138  304292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:58.219539  304292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:18:58.219860  304292 mustload.go:66] Loading cluster: addons-461635
	I1108 09:18:58.220828  304292 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:58.220854  304292 addons.go:607] checking whether the cluster is paused
	I1108 09:18:58.221017  304292 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:58.221034  304292 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:18:58.221500  304292 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:18:58.241094  304292 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:58.241158  304292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:18:58.258335  304292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:18:58.372620  304292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:18:58.372714  304292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:18:58.424449  304292 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:18:58.424472  304292 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:18:58.424478  304292 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:18:58.424482  304292 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:18:58.424486  304292 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:18:58.424490  304292 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:18:58.424493  304292 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:18:58.424497  304292 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:18:58.424501  304292 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:18:58.424511  304292 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:18:58.424520  304292 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:18:58.424523  304292 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:18:58.424526  304292 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:18:58.424530  304292 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:18:58.424533  304292 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:18:58.424541  304292 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:18:58.424552  304292 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:18:58.424557  304292 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:18:58.424561  304292 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:18:58.424564  304292 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:18:58.424568  304292 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:18:58.424571  304292 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:18:58.424575  304292 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:18:58.424578  304292 cri.go:89] found id: ""
	I1108 09:18:58.424627  304292 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:58.444228  304292 out.go:203] 
	W1108 09:18:58.447230  304292 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:18:58.447335  304292 out.go:285] * 
	* 
	W1108 09:18:58.458897  304292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:18:58.463435  304292 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable ingress --alsologtostderr -v=1: exit status 11 (272.515427ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:18:58.520599  304405 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:18:58.521526  304405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:58.521541  304405 out.go:374] Setting ErrFile to fd 2...
	I1108 09:18:58.521547  304405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:58.521880  304405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:18:58.522242  304405 mustload.go:66] Loading cluster: addons-461635
	I1108 09:18:58.522672  304405 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:58.522693  304405 addons.go:607] checking whether the cluster is paused
	I1108 09:18:58.522838  304405 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:58.522856  304405 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:18:58.523348  304405 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:18:58.541164  304405 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:58.541217  304405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:18:58.572583  304405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:18:58.679437  304405 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:18:58.679574  304405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:18:58.709488  304405 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:18:58.709512  304405 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:18:58.709517  304405 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:18:58.709521  304405 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:18:58.709524  304405 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:18:58.709528  304405 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:18:58.709531  304405 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:18:58.709534  304405 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:18:58.709537  304405 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:18:58.709543  304405 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:18:58.709547  304405 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:18:58.709550  304405 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:18:58.709553  304405 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:18:58.709556  304405 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:18:58.709559  304405 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:18:58.709564  304405 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:18:58.709572  304405 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:18:58.709575  304405 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:18:58.709578  304405 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:18:58.709581  304405 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:18:58.709586  304405 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:18:58.709589  304405 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:18:58.709596  304405 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:18:58.709599  304405 cri.go:89] found id: ""
	I1108 09:18:58.709654  304405 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:58.725369  304405 out.go:203] 
	W1108 09:18:58.728885  304405 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:18:58.728929  304405 out.go:285] * 
	* 
	W1108 09:18:58.735410  304405 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:18:58.738866  304405 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.57s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-tg2w5" [842cd1c8-8a41-4997-9a68-d6f17f8d5742] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003759531s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (256.084061ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:17:18.260505  303282 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:18.261311  303282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:18.261328  303282 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:18.261334  303282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:18.261626  303282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:17:18.261951  303282 mustload.go:66] Loading cluster: addons-461635
	I1108 09:17:18.262366  303282 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:18.262385  303282 addons.go:607] checking whether the cluster is paused
	I1108 09:17:18.262500  303282 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:18.262515  303282 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:17:18.263007  303282 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:17:18.280019  303282 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:18.280092  303282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:17:18.297930  303282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:17:18.403464  303282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:17:18.403545  303282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:17:18.434100  303282 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:17:18.434132  303282 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:17:18.434137  303282 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:17:18.434141  303282 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:17:18.434144  303282 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:17:18.434150  303282 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:17:18.434167  303282 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:17:18.434171  303282 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:17:18.434175  303282 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:17:18.434182  303282 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:17:18.434185  303282 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:17:18.434189  303282 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:17:18.434192  303282 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:17:18.434200  303282 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:17:18.434204  303282 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:17:18.434209  303282 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:17:18.434218  303282 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:17:18.434259  303282 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:17:18.434269  303282 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:17:18.434273  303282 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:17:18.434279  303282 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:17:18.434283  303282 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:17:18.434286  303282 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:17:18.434290  303282 cri.go:89] found id: ""
	I1108 09:17:18.434359  303282 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:18.449690  303282 out.go:203] 
	W1108 09:17:18.452588  303282 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:17:18.452627  303282 out.go:285] * 
	* 
	W1108 09:17:18.459172  303282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:17:18.462335  303282 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.156954ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00332996s
addons_test.go:463: (dbg) Run:  kubectl --context addons-461635 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (266.636743ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:16:33.954340  302180 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:33.955292  302180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:33.955338  302180 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:33.955361  302180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:33.955667  302180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:16:33.956010  302180 mustload.go:66] Loading cluster: addons-461635
	I1108 09:16:33.956418  302180 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:33.956454  302180 addons.go:607] checking whether the cluster is paused
	I1108 09:16:33.956597  302180 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:33.956628  302180 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:16:33.957200  302180 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:16:33.974409  302180 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:33.974468  302180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:16:33.993820  302180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:16:34.107486  302180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:34.107571  302180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:34.138751  302180 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:16:34.138771  302180 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:16:34.138777  302180 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:16:34.138781  302180 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:16:34.138785  302180 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:16:34.138789  302180 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:16:34.138792  302180 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:16:34.138795  302180 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:16:34.138798  302180 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:16:34.138804  302180 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:16:34.138808  302180 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:16:34.138811  302180 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:16:34.138815  302180 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:16:34.138818  302180 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:16:34.138822  302180 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:16:34.138828  302180 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:16:34.138832  302180 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:16:34.138837  302180 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:16:34.138840  302180 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:16:34.138843  302180 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:16:34.138854  302180 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:16:34.138858  302180 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:16:34.138861  302180 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:16:34.138864  302180 cri.go:89] found id: ""
	I1108 09:16:34.138912  302180 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:16:34.153389  302180 out.go:203] 
	W1108 09:16:34.156240  302180 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:16:34.156271  302180 out.go:285] * 
	* 
	W1108 09:16:34.162694  302180 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:16:34.165655  302180 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)

                                                
                                    
TestAddons/parallel/CSI (41.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1108 09:16:30.957277  294085 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1108 09:16:30.961243  294085 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1108 09:16:30.961267  294085 kapi.go:107] duration metric: took 3.998206ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.00788ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-461635 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-461635 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ad710aa8-cb99-44fa-b52d-4fec231ab354] Pending
helpers_test.go:352: "task-pv-pod" [ad710aa8-cb99-44fa-b52d-4fec231ab354] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ad710aa8-cb99-44fa-b52d-4fec231ab354] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004163393s
addons_test.go:572: (dbg) Run:  kubectl --context addons-461635 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-461635 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-461635 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-461635 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-461635 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-461635 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-461635 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4b50e1ac-6970-4b76-a0be-e86527115403] Pending
helpers_test.go:352: "task-pv-pod-restore" [4b50e1ac-6970-4b76-a0be-e86527115403] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4b50e1ac-6970-4b76-a0be-e86527115403] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004111642s
addons_test.go:614: (dbg) Run:  kubectl --context addons-461635 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-461635 delete pod task-pv-pod-restore: (1.193405522s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-461635 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-461635 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (284.120155ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:17:11.724193  303160 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:11.724903  303160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:11.724955  303160 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:11.724961  303160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:11.725276  303160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:17:11.725582  303160 mustload.go:66] Loading cluster: addons-461635
	I1108 09:17:11.725938  303160 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:11.725949  303160 addons.go:607] checking whether the cluster is paused
	I1108 09:17:11.726070  303160 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:11.726082  303160 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:17:11.726537  303160 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:17:11.744812  303160 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:11.744884  303160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:17:11.763856  303160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:17:11.872546  303160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:17:11.872626  303160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:17:11.901696  303160 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:17:11.901727  303160 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:17:11.901733  303160 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:17:11.901737  303160 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:17:11.901741  303160 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:17:11.901745  303160 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:17:11.901748  303160 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:17:11.901752  303160 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:17:11.901755  303160 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:17:11.901762  303160 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:17:11.901766  303160 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:17:11.901769  303160 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:17:11.901772  303160 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:17:11.901775  303160 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:17:11.901780  303160 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:17:11.901789  303160 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:17:11.901797  303160 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:17:11.901801  303160 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:17:11.901805  303160 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:17:11.901808  303160 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:17:11.901812  303160 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:17:11.901815  303160 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:17:11.901819  303160 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:17:11.901822  303160 cri.go:89] found id: ""
	I1108 09:17:11.901870  303160 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:11.917149  303160 out.go:203] 
	W1108 09:17:11.919968  303160 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:17:11.920004  303160 out.go:285] * 
	* 
	W1108 09:17:11.926585  303160 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:17:11.929581  303160 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (269.712162ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:17:11.985772  303206 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:11.986463  303206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:11.986481  303206 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:11.986487  303206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:11.986762  303206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:17:11.987081  303206 mustload.go:66] Loading cluster: addons-461635
	I1108 09:17:11.987555  303206 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:11.987576  303206 addons.go:607] checking whether the cluster is paused
	I1108 09:17:11.987718  303206 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:11.987738  303206 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:17:11.988348  303206 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:17:12.010637  303206 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:12.010712  303206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:17:12.031784  303206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:17:12.139893  303206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:17:12.140012  303206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:17:12.171093  303206 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:17:12.171114  303206 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:17:12.171130  303206 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:17:12.171135  303206 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:17:12.171139  303206 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:17:12.171143  303206 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:17:12.171146  303206 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:17:12.171150  303206 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:17:12.171154  303206 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:17:12.171165  303206 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:17:12.171171  303206 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:17:12.171174  303206 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:17:12.171178  303206 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:17:12.171182  303206 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:17:12.171190  303206 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:17:12.171202  303206 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:17:12.171210  303206 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:17:12.171215  303206 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:17:12.171218  303206 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:17:12.171221  303206 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:17:12.171226  303206 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:17:12.171229  303206 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:17:12.171232  303206 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:17:12.171236  303206 cri.go:89] found id: ""
	I1108 09:17:12.171286  303206 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:12.186794  303206 out.go:203] 
	W1108 09:17:12.189690  303206 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:17:12.189726  303206 out.go:285] * 
	* 
	W1108 09:17:12.196045  303206 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:17:12.198865  303206 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.25s)

                                                
                                    
TestAddons/parallel/Headlamp (3.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-461635 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-461635 --alsologtostderr -v=1: exit status 11 (269.000224ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:16:06.901837  300950 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:06.902588  300950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:06.902604  300950 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:06.902609  300950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:06.902896  300950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:16:06.903232  300950 mustload.go:66] Loading cluster: addons-461635
	I1108 09:16:06.903636  300950 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:06.903657  300950 addons.go:607] checking whether the cluster is paused
	I1108 09:16:06.903797  300950 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:06.903827  300950 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:16:06.904331  300950 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:16:06.923088  300950 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:06.923170  300950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:16:06.940600  300950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:16:07.047475  300950 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:07.047554  300950 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:07.079970  300950 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:16:07.079999  300950 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:16:07.080005  300950 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:16:07.080009  300950 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:16:07.080012  300950 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:16:07.080015  300950 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:16:07.080052  300950 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:16:07.080064  300950 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:16:07.080068  300950 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:16:07.080074  300950 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:16:07.080077  300950 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:16:07.080081  300950 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:16:07.080085  300950 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:16:07.080088  300950 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:16:07.080091  300950 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:16:07.080096  300950 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:16:07.080121  300950 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:16:07.080127  300950 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:16:07.080130  300950 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:16:07.080134  300950 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:16:07.080139  300950 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:16:07.080146  300950 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:16:07.080150  300950 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:16:07.080153  300950 cri.go:89] found id: ""
	I1108 09:16:07.080221  300950 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:16:07.095791  300950 out.go:203] 
	W1108 09:16:07.098717  300950 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:16:07.098749  300950 out.go:285] * 
	* 
	W1108 09:16:07.105393  300950 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:16:07.108453  300950 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-461635 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-461635
helpers_test.go:243: (dbg) docker inspect addons-461635:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6",
	        "Created": "2025-11-08T09:13:40.933160298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295293,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:13:40.995915601Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/hosts",
	        "LogPath": "/var/lib/docker/containers/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6-json.log",
	        "Name": "/addons-461635",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-461635:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-461635",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6",
	                "LowerDir": "/var/lib/docker/overlay2/5da389f65b257a70ef6517eb11b4312d339222d422b2c4f9e8475f505c2f6404-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5da389f65b257a70ef6517eb11b4312d339222d422b2c4f9e8475f505c2f6404/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5da389f65b257a70ef6517eb11b4312d339222d422b2c4f9e8475f505c2f6404/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5da389f65b257a70ef6517eb11b4312d339222d422b2c4f9e8475f505c2f6404/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-461635",
	                "Source": "/var/lib/docker/volumes/addons-461635/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-461635",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-461635",
	                "name.minikube.sigs.k8s.io": "addons-461635",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f5883474581e350ae2eca52ea9cc7173a14c2c0663e9df326d2d633cf44ed877",
	            "SandboxKey": "/var/run/docker/netns/f5883474581e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-461635": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:5a:45:39:05:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9daea876ae108ba25eea3cd32aa706b2fe54f1ae544f9d17ff1eb4b284d4fe68",
	                    "EndpointID": "b206692f7ad9f0edde23ceda1d22bcc170384cd340464d5f4cedbf521a0571c2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-461635",
	                        "2c24103c57a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-461635 -n addons-461635
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-461635 logs -n 25: (1.447975698s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-636192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-636192   │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ delete  │ -p download-only-636192                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-636192   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ start   │ -o=json --download-only -p download-only-209768 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-209768   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ delete  │ -p download-only-209768                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-209768   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ delete  │ -p download-only-636192                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-636192   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ delete  │ -p download-only-209768                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-209768   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ start   │ --download-only -p download-docker-036976 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-036976 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	│ delete  │ -p download-docker-036976                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-036976 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ start   │ --download-only -p binary-mirror-382750 --alsologtostderr --binary-mirror http://127.0.0.1:38109 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-382750   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	│ delete  │ -p binary-mirror-382750                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-382750   │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ addons  │ enable dashboard -p addons-461635                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-461635                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	│ start   │ -p addons-461635 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:15 UTC │
	│ addons  │ addons-461635 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:15 UTC │                     │
	│ addons  │ addons-461635 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-461635 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-461635          │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:13:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:13:14.929234  294890 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:13:14.929369  294890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:13:14.929378  294890 out.go:374] Setting ErrFile to fd 2...
	I1108 09:13:14.929384  294890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:13:14.929650  294890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:13:14.930088  294890 out.go:368] Setting JSON to false
	I1108 09:13:14.930917  294890 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6944,"bootTime":1762586251,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 09:13:14.930984  294890 start.go:143] virtualization:  
	I1108 09:13:14.934350  294890 out.go:179] * [addons-461635] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:13:14.938196  294890 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:13:14.938351  294890 notify.go:221] Checking for updates...
	I1108 09:13:14.944066  294890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:13:14.946995  294890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:13:14.949850  294890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 09:13:14.952957  294890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:13:14.955817  294890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:13:14.958875  294890 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:13:14.989611  294890 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:13:14.989790  294890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:13:15.081610  294890 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:13:15.071746751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:13:15.081719  294890 docker.go:319] overlay module found
	I1108 09:13:15.085004  294890 out.go:179] * Using the docker driver based on user configuration
	I1108 09:13:15.087840  294890 start.go:309] selected driver: docker
	I1108 09:13:15.087859  294890 start.go:930] validating driver "docker" against <nil>
	I1108 09:13:15.087881  294890 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:13:15.088676  294890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:13:15.149384  294890 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:13:15.139924792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:13:15.149544  294890 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:13:15.149784  294890 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:13:15.152707  294890 out.go:179] * Using Docker driver with root privileges
	I1108 09:13:15.155515  294890 cni.go:84] Creating CNI manager for ""
	I1108 09:13:15.155600  294890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:13:15.155615  294890 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:13:15.155706  294890 start.go:353] cluster config:
	{Name:addons-461635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:13:15.158837  294890 out.go:179] * Starting "addons-461635" primary control-plane node in "addons-461635" cluster
	I1108 09:13:15.161557  294890 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:13:15.164531  294890 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:13:15.167397  294890 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:13:15.167432  294890 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:13:15.167451  294890 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 09:13:15.167461  294890 cache.go:59] Caching tarball of preloaded images
	I1108 09:13:15.167552  294890 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 09:13:15.167562  294890 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:13:15.167896  294890 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/config.json ...
	I1108 09:13:15.167915  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/config.json: {Name:mk80158965353712057df83f45f11f645e406d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:15.184841  294890 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:13:15.185014  294890 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:13:15.185041  294890 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 09:13:15.185047  294890 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 09:13:15.185066  294890 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 09:13:15.185079  294890 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1108 09:13:32.985397  294890 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1108 09:13:32.985435  294890 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:13:32.985466  294890 start.go:360] acquireMachinesLock for addons-461635: {Name:mk5ac93816e32ad490db32cd4a09ffd11e3e098c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:13:32.986194  294890 start.go:364] duration metric: took 699.995µs to acquireMachinesLock for "addons-461635"
	I1108 09:13:32.986234  294890 start.go:93] Provisioning new machine with config: &{Name:addons-461635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:13:32.986330  294890 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:13:32.989823  294890 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 09:13:32.990062  294890 start.go:159] libmachine.API.Create for "addons-461635" (driver="docker")
	I1108 09:13:32.990102  294890 client.go:173] LocalClient.Create starting
	I1108 09:13:32.990235  294890 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 09:13:33.138094  294890 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 09:13:34.089145  294890 cli_runner.go:164] Run: docker network inspect addons-461635 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:13:34.107576  294890 cli_runner.go:211] docker network inspect addons-461635 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:13:34.107673  294890 network_create.go:284] running [docker network inspect addons-461635] to gather additional debugging logs...
	I1108 09:13:34.107694  294890 cli_runner.go:164] Run: docker network inspect addons-461635
	W1108 09:13:34.125389  294890 cli_runner.go:211] docker network inspect addons-461635 returned with exit code 1
	I1108 09:13:34.125443  294890 network_create.go:287] error running [docker network inspect addons-461635]: docker network inspect addons-461635: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-461635 not found
	I1108 09:13:34.125461  294890 network_create.go:289] output of [docker network inspect addons-461635]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-461635 not found
	
	** /stderr **
	I1108 09:13:34.125560  294890 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:13:34.141608  294890 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001969980}
	I1108 09:13:34.141659  294890 network_create.go:124] attempt to create docker network addons-461635 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 09:13:34.141714  294890 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-461635 addons-461635
	I1108 09:13:34.196256  294890 network_create.go:108] docker network addons-461635 192.168.49.0/24 created
	I1108 09:13:34.196287  294890 kic.go:121] calculated static IP "192.168.49.2" for the "addons-461635" container
	I1108 09:13:34.196368  294890 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:13:34.211553  294890 cli_runner.go:164] Run: docker volume create addons-461635 --label name.minikube.sigs.k8s.io=addons-461635 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:13:34.228706  294890 oci.go:103] Successfully created a docker volume addons-461635
	I1108 09:13:34.228792  294890 cli_runner.go:164] Run: docker run --rm --name addons-461635-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-461635 --entrypoint /usr/bin/test -v addons-461635:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:13:36.440449  294890 cli_runner.go:217] Completed: docker run --rm --name addons-461635-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-461635 --entrypoint /usr/bin/test -v addons-461635:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (2.211616396s)
	I1108 09:13:36.440478  294890 oci.go:107] Successfully prepared a docker volume addons-461635
	I1108 09:13:36.440506  294890 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:13:36.440525  294890 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:13:36.440601  294890 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-461635:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:13:40.853645  294890 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-461635:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413006013s)
	I1108 09:13:40.853676  294890 kic.go:203] duration metric: took 4.413147816s to extract preloaded images to volume ...
	W1108 09:13:40.853842  294890 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 09:13:40.853960  294890 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:13:40.918002  294890 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-461635 --name addons-461635 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-461635 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-461635 --network addons-461635 --ip 192.168.49.2 --volume addons-461635:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:13:41.234823  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Running}}
	I1108 09:13:41.254821  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:13:41.278537  294890 cli_runner.go:164] Run: docker exec addons-461635 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:13:41.336856  294890 oci.go:144] the created container "addons-461635" has a running status.
	I1108 09:13:41.336887  294890 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa...
	I1108 09:13:41.803752  294890 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:13:41.822032  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:13:41.837896  294890 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:13:41.837918  294890 kic_runner.go:114] Args: [docker exec --privileged addons-461635 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:13:41.883355  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:13:41.900520  294890 machine.go:94] provisionDockerMachine start ...
	I1108 09:13:41.900633  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:41.918412  294890 main.go:143] libmachine: Using SSH client type: native
	I1108 09:13:41.918752  294890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1108 09:13:41.918770  294890 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:13:41.919376  294890 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45250->127.0.0.1:33138: read: connection reset by peer
	I1108 09:13:45.090458  294890 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-461635
	
	I1108 09:13:45.090483  294890 ubuntu.go:182] provisioning hostname "addons-461635"
	I1108 09:13:45.090559  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:45.120410  294890 main.go:143] libmachine: Using SSH client type: native
	I1108 09:13:45.120779  294890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1108 09:13:45.120798  294890 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-461635 && echo "addons-461635" | sudo tee /etc/hostname
	I1108 09:13:45.314549  294890 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-461635
	
	I1108 09:13:45.314722  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:45.338072  294890 main.go:143] libmachine: Using SSH client type: native
	I1108 09:13:45.338650  294890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1108 09:13:45.338705  294890 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-461635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-461635/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-461635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:13:45.497186  294890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:13:45.497212  294890 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 09:13:45.497233  294890 ubuntu.go:190] setting up certificates
	I1108 09:13:45.497264  294890 provision.go:84] configureAuth start
	I1108 09:13:45.497347  294890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-461635
	I1108 09:13:45.515025  294890 provision.go:143] copyHostCerts
	I1108 09:13:45.515113  294890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 09:13:45.515243  294890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 09:13:45.515316  294890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 09:13:45.515381  294890 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.addons-461635 san=[127.0.0.1 192.168.49.2 addons-461635 localhost minikube]
	I1108 09:13:45.791521  294890 provision.go:177] copyRemoteCerts
	I1108 09:13:45.791591  294890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:13:45.791631  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:45.809323  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:45.912701  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:13:45.930370  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:13:45.947576  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:13:45.964966  294890 provision.go:87] duration metric: took 467.682646ms to configureAuth
	I1108 09:13:45.964991  294890 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:13:45.965177  294890 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:13:45.965282  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:45.982156  294890 main.go:143] libmachine: Using SSH client type: native
	I1108 09:13:45.982464  294890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1108 09:13:45.982485  294890 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:13:46.238850  294890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:13:46.238933  294890 machine.go:97] duration metric: took 4.338381809s to provisionDockerMachine
	I1108 09:13:46.238962  294890 client.go:176] duration metric: took 13.248846703s to LocalClient.Create
	I1108 09:13:46.239003  294890 start.go:167] duration metric: took 13.248940308s to libmachine.API.Create "addons-461635"
	I1108 09:13:46.239025  294890 start.go:293] postStartSetup for "addons-461635" (driver="docker")
	I1108 09:13:46.239057  294890 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:13:46.239145  294890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:13:46.239229  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:46.257122  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:46.360876  294890 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:13:46.364173  294890 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:13:46.364203  294890 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:13:46.364215  294890 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 09:13:46.364281  294890 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 09:13:46.364307  294890 start.go:296] duration metric: took 125.254761ms for postStartSetup
	I1108 09:13:46.364635  294890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-461635
	I1108 09:13:46.381054  294890 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/config.json ...
	I1108 09:13:46.381334  294890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:13:46.381386  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:46.397798  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:46.497721  294890 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:13:46.502150  294890 start.go:128] duration metric: took 13.515804486s to createHost
	I1108 09:13:46.502176  294890 start.go:83] releasing machines lock for "addons-461635", held for 13.515958745s
	I1108 09:13:46.502246  294890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-461635
	I1108 09:13:46.522492  294890 ssh_runner.go:195] Run: cat /version.json
	I1108 09:13:46.522519  294890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:13:46.522546  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:46.522585  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:13:46.545834  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:46.546255  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:13:46.648535  294890 ssh_runner.go:195] Run: systemctl --version
	I1108 09:13:46.782919  294890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:13:46.819424  294890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:13:46.823688  294890 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:13:46.823760  294890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:13:46.852866  294890 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 09:13:46.852888  294890 start.go:496] detecting cgroup driver to use...
	I1108 09:13:46.852936  294890 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 09:13:46.852990  294890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:13:46.869780  294890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:13:46.882426  294890 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:13:46.882493  294890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:13:46.900220  294890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:13:46.919128  294890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:13:47.029594  294890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:13:47.150541  294890 docker.go:234] disabling docker service ...
	I1108 09:13:47.150613  294890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:13:47.171731  294890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:13:47.183828  294890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:13:47.290891  294890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:13:47.400677  294890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:13:47.412720  294890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:13:47.426053  294890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:13:47.426117  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.434208  294890 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:13:47.434273  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.442685  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.450913  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.458929  294890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:13:47.466510  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.474736  294890 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:13:47.487870  294890 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
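
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup to "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A rough Go sketch of the first two substitutions (illustrative only, not how minikube performs them) could look like:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}
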
	I1108 09:13:47.496370  294890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:13:47.503634  294890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:13:47.511076  294890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:13:47.619106  294890 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:13:47.735018  294890 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:13:47.735107  294890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
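
start.go:543 above waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A minimal poll loop sketching that wait (illustrative, not the actual minikube code):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
			os.Exit(1)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
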
	I1108 09:13:47.738959  294890 start.go:564] Will wait 60s for crictl version
	I1108 09:13:47.739025  294890 ssh_runner.go:195] Run: which crictl
	I1108 09:13:47.742161  294890 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:13:47.765154  294890 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:13:47.765297  294890 ssh_runner.go:195] Run: crio --version
	I1108 09:13:47.793309  294890 ssh_runner.go:195] Run: crio --version
	I1108 09:13:47.829428  294890 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:13:47.832365  294890 cli_runner.go:164] Run: docker network inspect addons-461635 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:13:47.848872  294890 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 09:13:47.852702  294890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:13:47.862228  294890 kubeadm.go:884] updating cluster {Name:addons-461635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:13:47.862347  294890 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:13:47.862406  294890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:13:47.896897  294890 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:13:47.896951  294890 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:13:47.897009  294890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:13:47.922027  294890 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:13:47.922048  294890 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:13:47.922056  294890 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1108 09:13:47.922146  294890 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-461635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:13:47.922231  294890 ssh_runner.go:195] Run: crio config
	I1108 09:13:47.993794  294890 cni.go:84] Creating CNI manager for ""
	I1108 09:13:47.993819  294890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:13:47.993835  294890 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:13:47.993860  294890 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-461635 NodeName:addons-461635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:13:47.993990  294890 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-461635"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:13:47.994065  294890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:13:48.002817  294890 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:13:48.002906  294890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:13:48.012417  294890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1108 09:13:48.027383  294890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:13:48.041363  294890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
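
The 2210-byte payload copied to /var/tmp/minikube/kubeadm.yaml.new is the multi-document configuration printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Assuming gopkg.in/yaml.v3 is available, a small sketch that splits the stream and lists each document's apiVersion and kind as a sanity check (illustrative only):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decode one YAML document per loop iteration until the stream ends.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
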
	I1108 09:13:48.055171  294890 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:13:48.058920  294890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:13:48.068978  294890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:13:48.176174  294890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:13:48.191871  294890 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635 for IP: 192.168.49.2
	I1108 09:13:48.191941  294890 certs.go:195] generating shared ca certs ...
	I1108 09:13:48.191973  294890 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.192150  294890 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 09:13:48.544547  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt ...
	I1108 09:13:48.544580  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt: {Name:mke8c25306173191bbb978cc6b31777620639408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.545376  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key ...
	I1108 09:13:48.545397  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key: {Name:mkc48658a22731476e821f52cd5e14ba7058b5b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.545535  294890 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 09:13:48.726279  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt ...
	I1108 09:13:48.726308  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt: {Name:mka45cedd4150e66b2aea13b1729389e2dff3937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.726488  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key ...
	I1108 09:13:48.726503  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key: {Name:mkfd18642b86eb3301c865accec77a9eec51dea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:48.726584  294890 certs.go:257] generating profile certs ...
	I1108 09:13:48.726650  294890 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.key
	I1108 09:13:48.726669  294890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt with IP's: []
	I1108 09:13:49.194093  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt ...
	I1108 09:13:49.194125  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: {Name:mk5e8f185890f69ee75504fd11f70ef4a8cb1585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:49.194321  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.key ...
	I1108 09:13:49.194336  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.key: {Name:mk1721b65dea7348aad0517764302f1f8a3d0be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:49.195100  294890 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key.f1a283e1
	I1108 09:13:49.195125  294890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt.f1a283e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1108 09:13:49.370163  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt.f1a283e1 ...
	I1108 09:13:49.370190  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt.f1a283e1: {Name:mkb0741c9eda29b989f745dd1aab0e87f7499d26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:49.370358  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key.f1a283e1 ...
	I1108 09:13:49.370372  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key.f1a283e1: {Name:mk183102dcd9a2b367f13b3d268a66590afcd934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:49.370456  294890 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt.f1a283e1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt
	I1108 09:13:49.370531  294890 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key.f1a283e1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key
	I1108 09:13:49.370592  294890 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.key
	I1108 09:13:49.370611  294890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.crt with IP's: []
	I1108 09:13:50.074708  294890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.crt ...
	I1108 09:13:50.074741  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.crt: {Name:mk18c109963015c3ea7a23f35f9df2d631cbb402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:50.074955  294890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.key ...
	I1108 09:13:50.074973  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.key: {Name:mke729483fdd3d313e213d9507bf0dcd52c2aa18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:13:50.075900  294890 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:13:50.075947  294890 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:13:50.075978  294890 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:13:50.076010  294890 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 09:13:50.076601  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:13:50.096397  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:13:50.116365  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:13:50.134588  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 09:13:50.153870  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:13:50.172313  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:13:50.189844  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:13:50.207211  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:13:50.224509  294890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:13:50.241558  294890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:13:50.254755  294890 ssh_runner.go:195] Run: openssl version
	I1108 09:13:50.260967  294890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:13:50.269262  294890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:13:50.272804  294890 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:13:50.272975  294890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:13:50.317744  294890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
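
After copying ca.crt to /usr/share/ca-certificates/minikubeCA.pem, the log hashes it with openssl and symlinks it into /etc/ssl/certs. For a quick look at what that CA certificate contains, a small Go sketch using only the standard library (illustrative; the subject-hash/symlink step itself is left to openssl exactly as the log shows):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject:  ", cert.Subject.CommonName)
	fmt.Println("not after:", cert.NotAfter)
	fmt.Println("is CA:    ", cert.IsCA)
}
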
	I1108 09:13:50.326186  294890 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:13:50.329588  294890 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:13:50.329637  294890 kubeadm.go:401] StartCluster: {Name:addons-461635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-461635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:13:50.329710  294890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:13:50.329764  294890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:13:50.355696  294890 cri.go:89] found id: ""
	I1108 09:13:50.355773  294890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:13:50.363383  294890 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:13:50.371169  294890 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:13:50.371235  294890 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:13:50.378840  294890 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:13:50.378861  294890 kubeadm.go:158] found existing configuration files:
	
	I1108 09:13:50.378937  294890 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:13:50.386377  294890 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:13:50.386442  294890 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:13:50.393811  294890 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:13:50.401514  294890 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:13:50.401601  294890 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:13:50.408867  294890 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:13:50.416470  294890 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:13:50.416575  294890 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:13:50.423734  294890 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:13:50.431967  294890 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:13:50.432031  294890 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:13:50.439200  294890 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:13:50.529021  294890 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 09:13:50.529358  294890 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 09:13:50.605892  294890 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:14:07.834534  294890 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:14:07.834610  294890 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:14:07.834735  294890 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:14:07.834804  294890 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 09:14:07.834853  294890 kubeadm.go:319] OS: Linux
	I1108 09:14:07.834918  294890 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:14:07.834990  294890 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 09:14:07.835045  294890 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:14:07.835109  294890 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:14:07.835179  294890 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:14:07.835257  294890 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:14:07.835307  294890 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:14:07.835358  294890 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:14:07.835430  294890 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 09:14:07.835514  294890 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:14:07.835613  294890 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:14:07.835707  294890 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:14:07.835772  294890 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:14:07.838791  294890 out.go:252]   - Generating certificates and keys ...
	I1108 09:14:07.838887  294890 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:14:07.838961  294890 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:14:07.839040  294890 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:14:07.839105  294890 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:14:07.839174  294890 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:14:07.839231  294890 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:14:07.839292  294890 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:14:07.839417  294890 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-461635 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:14:07.839496  294890 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:14:07.839622  294890 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-461635 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:14:07.839693  294890 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:14:07.839763  294890 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:14:07.839814  294890 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:14:07.839878  294890 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:14:07.839934  294890 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:14:07.839998  294890 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:14:07.840062  294890 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:14:07.840134  294890 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:14:07.840200  294890 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:14:07.840289  294890 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:14:07.840377  294890 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:14:07.843480  294890 out.go:252]   - Booting up control plane ...
	I1108 09:14:07.843598  294890 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:14:07.843684  294890 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:14:07.843761  294890 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:14:07.843878  294890 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:14:07.843999  294890 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:14:07.844116  294890 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:14:07.844211  294890 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:14:07.844255  294890 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:14:07.844435  294890 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:14:07.844578  294890 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:14:07.844651  294890 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501767819s
	I1108 09:14:07.844791  294890 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:14:07.844947  294890 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1108 09:14:07.845058  294890 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:14:07.845165  294890 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:14:07.845295  294890 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.719539162s
	I1108 09:14:07.845371  294890 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.297685716s
	I1108 09:14:07.845449  294890 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502237044s
	I1108 09:14:07.845606  294890 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:14:07.845806  294890 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:14:07.845921  294890 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:14:07.846169  294890 kubeadm.go:319] [mark-control-plane] Marking the node addons-461635 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:14:07.846248  294890 kubeadm.go:319] [bootstrap-token] Using token: 29waul.3t39uxcwk9pz3oyr
	I1108 09:14:07.851141  294890 out.go:252]   - Configuring RBAC rules ...
	I1108 09:14:07.851302  294890 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:14:07.851405  294890 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:14:07.851556  294890 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:14:07.851699  294890 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:14:07.851825  294890 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:14:07.851920  294890 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:14:07.852043  294890 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:14:07.852092  294890 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:14:07.852145  294890 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:14:07.852154  294890 kubeadm.go:319] 
	I1108 09:14:07.852217  294890 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:14:07.852224  294890 kubeadm.go:319] 
	I1108 09:14:07.852305  294890 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:14:07.852315  294890 kubeadm.go:319] 
	I1108 09:14:07.852348  294890 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:14:07.852411  294890 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:14:07.852470  294890 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:14:07.852480  294890 kubeadm.go:319] 
	I1108 09:14:07.852539  294890 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:14:07.852546  294890 kubeadm.go:319] 
	I1108 09:14:07.852596  294890 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:14:07.852600  294890 kubeadm.go:319] 
	I1108 09:14:07.852655  294890 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:14:07.852732  294890 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:14:07.852803  294890 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:14:07.852808  294890 kubeadm.go:319] 
	I1108 09:14:07.852897  294890 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:14:07.853109  294890 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:14:07.853118  294890 kubeadm.go:319] 
	I1108 09:14:07.853206  294890 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 29waul.3t39uxcwk9pz3oyr \
	I1108 09:14:07.853314  294890 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 09:14:07.853337  294890 kubeadm.go:319] 	--control-plane 
	I1108 09:14:07.853342  294890 kubeadm.go:319] 
	I1108 09:14:07.853431  294890 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:14:07.853435  294890 kubeadm.go:319] 
	I1108 09:14:07.853521  294890 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 29waul.3t39uxcwk9pz3oyr \
	I1108 09:14:07.853644  294890 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
	I1108 09:14:07.853653  294890 cni.go:84] Creating CNI manager for ""
	I1108 09:14:07.853660  294890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:14:07.856683  294890 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:14:07.859701  294890 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:14:07.863721  294890 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:14:07.863791  294890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:14:07.877859  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:14:08.161168  294890 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:14:08.161401  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:08.161527  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-461635 minikube.k8s.io/updated_at=2025_11_08T09_14_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=addons-461635 minikube.k8s.io/primary=true
	I1108 09:14:08.295981  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:08.296038  294890 ops.go:34] apiserver oom_adj: -16
	I1108 09:14:08.796202  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:09.297030  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:09.796782  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:10.296722  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:10.797047  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:11.296832  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:11.796746  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:12.296104  294890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:14:12.396381  294890 kubeadm.go:1114] duration metric: took 4.235043126s to wait for elevateKubeSystemPrivileges
	I1108 09:14:12.396408  294890 kubeadm.go:403] duration metric: took 22.066773896s to StartCluster
	I1108 09:14:12.396424  294890 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:14:12.396539  294890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:14:12.397010  294890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:14:12.397217  294890 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:14:12.397356  294890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:14:12.397599  294890 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:14:12.397628  294890 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1108 09:14:12.397720  294890 addons.go:70] Setting yakd=true in profile "addons-461635"
	I1108 09:14:12.397738  294890 addons.go:239] Setting addon yakd=true in "addons-461635"
	I1108 09:14:12.397761  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.398225  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.398547  294890 addons.go:70] Setting inspektor-gadget=true in profile "addons-461635"
	I1108 09:14:12.398564  294890 addons.go:239] Setting addon inspektor-gadget=true in "addons-461635"
	I1108 09:14:12.398586  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.399005  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.399598  294890 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-461635"
	I1108 09:14:12.399626  294890 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-461635"
	I1108 09:14:12.399664  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.400226  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.400531  294890 addons.go:70] Setting metrics-server=true in profile "addons-461635"
	I1108 09:14:12.400557  294890 addons.go:239] Setting addon metrics-server=true in "addons-461635"
	I1108 09:14:12.400581  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.401075  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.407197  294890 addons.go:70] Setting cloud-spanner=true in profile "addons-461635"
	I1108 09:14:12.407236  294890 addons.go:239] Setting addon cloud-spanner=true in "addons-461635"
	I1108 09:14:12.407269  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.407733  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.407869  294890 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-461635"
	I1108 09:14:12.407889  294890 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-461635"
	I1108 09:14:12.407909  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.408297  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.413862  294890 addons.go:70] Setting registry=true in profile "addons-461635"
	I1108 09:14:12.413899  294890 addons.go:239] Setting addon registry=true in "addons-461635"
	I1108 09:14:12.413934  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.414401  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.424582  294890 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-461635"
	I1108 09:14:12.424702  294890 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-461635"
	I1108 09:14:12.424761  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.425300  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.434821  294890 addons.go:70] Setting default-storageclass=true in profile "addons-461635"
	I1108 09:14:12.434865  294890 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-461635"
	I1108 09:14:12.435217  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.454071  294890 addons.go:70] Setting registry-creds=true in profile "addons-461635"
	I1108 09:14:12.454279  294890 addons.go:70] Setting gcp-auth=true in profile "addons-461635"
	I1108 09:14:12.454306  294890 mustload.go:66] Loading cluster: addons-461635
	I1108 09:14:12.454531  294890 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:14:12.454806  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.460318  294890 addons.go:239] Setting addon registry-creds=true in "addons-461635"
	I1108 09:14:12.460378  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.464547  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.487129  294890 addons.go:70] Setting storage-provisioner=true in profile "addons-461635"
	I1108 09:14:12.487187  294890 addons.go:239] Setting addon storage-provisioner=true in "addons-461635"
	I1108 09:14:12.487240  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.487863  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.495370  294890 addons.go:70] Setting ingress=true in profile "addons-461635"
	I1108 09:14:12.495414  294890 addons.go:239] Setting addon ingress=true in "addons-461635"
	I1108 09:14:12.542275  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.542864  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.558795  294890 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 09:14:12.562259  294890 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 09:14:12.562282  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 09:14:12.562356  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.501288  294890 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-461635"
	I1108 09:14:12.576923  294890 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-461635"
	I1108 09:14:12.577302  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.501301  294890 addons.go:70] Setting volcano=true in profile "addons-461635"
	I1108 09:14:12.592223  294890 addons.go:239] Setting addon volcano=true in "addons-461635"
	I1108 09:14:12.592266  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.592754  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.501308  294890 addons.go:70] Setting volumesnapshots=true in profile "addons-461635"
	I1108 09:14:12.619089  294890 addons.go:239] Setting addon volumesnapshots=true in "addons-461635"
	I1108 09:14:12.619185  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.619794  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.632862  294890 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 09:14:12.636123  294890 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:14:12.636167  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 09:14:12.636258  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.652066  294890 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 09:14:12.652276  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 09:14:12.501508  294890 out.go:179] * Verifying Kubernetes components...
	I1108 09:14:12.510962  294890 addons.go:70] Setting ingress-dns=true in profile "addons-461635"
	I1108 09:14:12.654440  294890 addons.go:239] Setting addon default-storageclass=true in "addons-461635"
	I1108 09:14:12.654487  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.655053  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.706758  294890 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:14:12.706781  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 09:14:12.706857  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.712461  294890 addons.go:239] Setting addon ingress-dns=true in "addons-461635"
	I1108 09:14:12.712541  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.713247  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.733996  294890 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 09:14:12.737011  294890 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 09:14:12.741036  294890 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 09:14:12.741152  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.745181  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 09:14:12.745208  294890 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 09:14:12.745322  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.759877  294890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:14:12.760009  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 09:14:12.741083  294890 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 09:14:12.737079  294890 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 09:14:12.795607  294890 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 09:14:12.795695  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.799495  294890 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:14:12.807771  294890 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 09:14:12.741044  294890 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:14:12.807881  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 09:14:12.807943  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	W1108 09:14:12.809206  294890 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 09:14:12.809408  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:12.811203  294890 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-461635"
	I1108 09:14:12.811683  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:12.812134  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:12.818562  294890 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 09:14:12.820501  294890 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 09:14:12.820523  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 09:14:12.820656  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.829103  294890 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 09:14:12.829544  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:12.830587  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 09:14:12.830762  294890 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:14:12.832633  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 09:14:12.832720  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.863206  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1108 09:14:12.863479  294890 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 09:14:12.832015  294890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:14:12.863816  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:14:12.863889  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.889929  294890 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1108 09:14:12.893085  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 09:14:12.893317  294890 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:14:12.893334  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 09:14:12.893398  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.919685  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 09:14:12.919837  294890 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 09:14:12.919876  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 09:14:12.922873  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 09:14:12.922906  294890 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 09:14:12.922979  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.927429  294890 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:14:12.927450  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 09:14:12.927543  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.950532  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 09:14:12.959227  294890 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 09:14:12.963842  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 09:14:12.963865  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 09:14:12.963938  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.964212  294890 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:14:12.964226  294890 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:14:12.964267  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:12.994719  294890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:14:12.996139  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.007726  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.008582  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.018245  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.073628  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.074709  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.078724  294890 out.go:179]   - Using image docker.io/busybox:stable
	I1108 09:14:13.085450  294890 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 09:14:13.093120  294890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:14:13.093145  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 09:14:13.093209  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:13.101045  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.118701  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.143629  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.153956  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	W1108 09:14:13.157677  294890 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 09:14:13.157712  294890 retry.go:31] will retry after 223.376922ms: ssh: handshake failed: EOF
	I1108 09:14:13.159723  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.165901  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:13.170576  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	W1108 09:14:13.172251  294890 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 09:14:13.172277  294890 retry.go:31] will retry after 162.087618ms: ssh: handshake failed: EOF
	I1108 09:14:13.380567  294890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:14:13.679286  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:14:13.697935  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 09:14:13.734983  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:14:13.819566  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:14:13.873506  294890 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 09:14:13.873596  294890 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 09:14:13.938516  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:14:13.940815  294890 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 09:14:13.940885  294890 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 09:14:13.998993  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 09:14:13.999073  294890 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 09:14:14.002798  294890 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 09:14:14.002874  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 09:14:14.015271  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:14:14.023295  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:14:14.042596  294890 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 09:14:14.042672  294890 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 09:14:14.072221  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:14:14.074559  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:14:14.163677  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 09:14:14.163754  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 09:14:14.164116  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:14:14.181581  294890 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:14:14.181660  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 09:14:14.197414  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 09:14:14.197490  294890 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 09:14:14.201711  294890 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 09:14:14.201790  294890 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 09:14:14.204655  294890 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 09:14:14.204730  294890 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 09:14:14.351226  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:14:14.353986  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 09:14:14.354061  294890 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 09:14:14.355956  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 09:14:14.356031  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 09:14:14.372560  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 09:14:14.372638  294890 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 09:14:14.377907  294890 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:14:14.377980  294890 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 09:14:14.512035  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:14:14.564605  294890 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:14:14.564629  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 09:14:14.581185  294890 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:14:14.581209  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 09:14:14.582334  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 09:14:14.582355  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 09:14:14.698651  294890 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.703888419s)
	I1108 09:14:14.698684  294890 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
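	
	The completed one-liner above rewrites the coredns ConfigMap so that CoreDNS resolves host.minikube.internal to the gateway address 192.168.49.1. A quick way to confirm the injected block, assuming kubectl access to the cluster (a sketch, not part of the captured run):
	
	  # sketch only: show the hosts stanza the sed pipeline inserted into the Corefile
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	  #    hosts {
	  #       192.168.49.1 host.minikube.internal
	  #       fallthrough
	  #    }
	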
	I1108 09:14:14.699629  294890 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.318992729s)
	I1108 09:14:14.700237  294890 node_ready.go:35] waiting up to 6m0s for node "addons-461635" to be "Ready" ...
	I1108 09:14:14.768633  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 09:14:14.768706  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 09:14:14.772862  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:14:14.877148  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:14:14.891917  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.212579954s)
	I1108 09:14:15.085133  294890 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 09:14:15.085217  294890 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 09:14:15.212426  294890 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-461635" context rescaled to 1 replicas
	I1108 09:14:15.373114  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 09:14:15.373184  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 09:14:15.524334  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 09:14:15.524411  294890 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 09:14:15.721571  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 09:14:15.721596  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 09:14:15.894216  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.196195449s)
	I1108 09:14:15.894329  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.159272953s)
	I1108 09:14:15.894404  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.074765548s)
	I1108 09:14:15.894459  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.955880288s)
	I1108 09:14:15.933397  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 09:14:15.933424  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 09:14:16.074991  294890 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 09:14:16.075024  294890 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 09:14:16.315128  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1108 09:14:16.719838  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:17.175033  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.159677897s)
	I1108 09:14:17.175185  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.151820904s)
	I1108 09:14:18.797566  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.725263969s)
	I1108 09:14:18.798058  294890 addons.go:480] Verifying addon ingress=true in "addons-461635"
	I1108 09:14:18.797701  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.723052598s)
	I1108 09:14:18.797747  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.633580225s)
	I1108 09:14:18.797772  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.446474073s)
	I1108 09:14:18.798282  294890 addons.go:480] Verifying addon registry=true in "addons-461635"
	I1108 09:14:18.797820  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.285714158s)
	I1108 09:14:18.798814  294890 addons.go:480] Verifying addon metrics-server=true in "addons-461635"
	I1108 09:14:18.797900  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.024892614s)
	W1108 09:14:18.798856  294890 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 09:14:18.798871  294890 retry.go:31] will retry after 238.442509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
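	
	The stderr above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same kubectl invocation that installs the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind, kubectl reports "no matches for kind", and minikube schedules a retry (it later re-applies the same files with --force, as seen below). A minimal sketch of the same fix outside minikube, assuming kubectl is on PATH and the addon manifests are local files, is to wait for the CRD to be Established before creating instances of it:
	
	  # sketch only: install the CRD first, wait for registration, then apply the class
	  kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	  kubectl wait --for=condition=Established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  kubectl apply -f csi-hostpath-snapshotclass.yaml
	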
	I1108 09:14:18.797928  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.920703279s)
	I1108 09:14:18.802500  294890 out.go:179] * Verifying ingress addon...
	I1108 09:14:18.802504  294890 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-461635 service yakd-dashboard -n yakd-dashboard
	
	I1108 09:14:18.802613  294890 out.go:179] * Verifying registry addon...
	I1108 09:14:18.806063  294890 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 09:14:18.808805  294890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1108 09:14:18.813105  294890 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 09:14:18.813180  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:18.816377  294890 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:14:18.816445  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:19.034933  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.719707551s)
	I1108 09:14:19.035018  294890 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-461635"
	I1108 09:14:19.038078  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:14:19.038218  294890 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 09:14:19.041795  294890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 09:14:19.056702  294890 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:14:19.056766  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:19.203354  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:19.310142  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:19.312490  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:19.545411  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:19.809594  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:19.811428  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:20.046020  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:20.310348  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:20.312491  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:20.353753  294890 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 09:14:20.353866  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:20.370773  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:20.481530  294890 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 09:14:20.494880  294890 addons.go:239] Setting addon gcp-auth=true in "addons-461635"
	I1108 09:14:20.494929  294890 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:14:20.495389  294890 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:14:20.511927  294890 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 09:14:20.511978  294890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:14:20.530413  294890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:14:20.546294  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:20.809536  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:20.811549  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:21.046240  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:21.203871  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:21.311016  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:21.312028  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:21.547005  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:21.740815  294890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.702677796s)
	I1108 09:14:21.740885  294890 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.228939452s)
	I1108 09:14:21.743923  294890 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 09:14:21.746692  294890 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 09:14:21.749553  294890 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 09:14:21.749586  294890 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 09:14:21.763701  294890 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 09:14:21.763731  294890 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 09:14:21.776307  294890 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:14:21.776374  294890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 09:14:21.790143  294890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:14:21.809839  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:21.812245  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:22.045988  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:22.296690  294890 addons.go:480] Verifying addon gcp-auth=true in "addons-461635"
	I1108 09:14:22.300127  294890 out.go:179] * Verifying gcp-auth addon...
	I1108 09:14:22.303673  294890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 09:14:22.306595  294890 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 09:14:22.306614  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:22.308967  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:22.311511  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:22.546917  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:22.806609  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:22.808874  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:22.811238  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:23.045560  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:23.307667  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:23.310749  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:23.311791  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:23.544677  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:23.703507  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:23.807980  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:23.809419  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:23.811700  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:24.044718  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:24.306618  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:24.308822  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:24.311203  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:24.544995  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:24.807431  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:24.809664  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:24.811504  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:25.044822  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:25.307049  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:25.308639  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:25.312115  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:25.545352  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:25.807017  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:25.809309  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:25.811367  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:26.045474  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:26.204248  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:26.307045  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:26.308966  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:26.311105  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:26.545443  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:26.807062  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:26.808897  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:26.810947  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:27.044695  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:27.308265  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:27.310029  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:27.311716  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:27.544442  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:27.807151  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:27.808540  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:27.811827  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:28.044581  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:28.307366  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:28.309174  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:28.311434  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:28.545812  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:28.703724  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:28.806352  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:28.808723  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:28.812296  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:29.045172  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:29.309635  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:29.309865  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:29.311718  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:29.544631  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:29.807239  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:29.809563  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:29.811448  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:30.045927  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:30.307040  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:30.308862  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:30.311049  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:30.545265  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:30.807191  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:30.809001  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:30.810877  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:31.044931  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:31.203797  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:31.307766  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:31.308686  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:31.311656  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:31.545585  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:31.807416  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:31.809445  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:31.811325  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:32.045862  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:32.307315  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:32.310261  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:32.311194  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:32.545319  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:32.807161  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:32.808801  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:32.811171  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:33.045244  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:33.307645  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:33.309525  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:33.311266  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:33.549929  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:33.703770  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:33.806645  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:33.808752  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:33.812186  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:34.045207  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:34.306920  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:34.308848  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:34.311291  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:34.545638  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:34.807199  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:34.809357  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:34.811137  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:35.045236  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:35.308250  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:35.309550  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:35.311524  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:35.546647  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:35.806505  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:35.808639  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:35.811865  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:36.044673  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:36.203680  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:36.306644  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:36.310592  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:36.312380  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:36.545821  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:36.806621  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:36.808845  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:36.811130  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:37.045148  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:37.307581  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:37.310491  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:37.311440  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:37.545346  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:37.806639  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:37.808618  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:37.811999  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:38.045182  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:38.306963  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:38.309164  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:38.311279  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:38.545263  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:38.703272  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:38.807391  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:38.809998  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:38.811881  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:39.044599  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:39.307915  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:39.309498  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:39.311380  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:39.545321  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:39.806671  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:39.808738  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:39.811922  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:40.047760  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:40.306909  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:40.317237  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:40.318487  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:40.545820  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:40.703579  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:40.807748  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:40.811590  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:40.812467  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:41.045473  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:41.307962  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:41.310000  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:41.311783  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:41.544904  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:41.807064  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:41.808825  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:41.811010  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:42.045255  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:42.307475  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:42.310091  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:42.312377  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:42.545665  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:42.703971  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:42.807600  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:42.810550  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:42.812821  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:43.044794  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:43.306753  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:43.309507  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:43.311650  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:43.544568  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:43.808335  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:43.810761  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:43.811867  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:44.044750  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:44.306886  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:44.309608  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:44.311348  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:44.545768  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:44.806853  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:44.816360  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:44.818937  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:45.046596  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:45.203758  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:45.311348  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:45.311804  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:45.314212  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:45.545142  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:45.808638  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:45.809323  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:45.811469  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:46.045951  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:46.307362  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:46.309579  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:46.311605  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:46.545000  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:46.807679  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:46.809193  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:46.810942  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:47.045561  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:47.306819  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:47.309165  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:47.311720  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:47.545827  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:47.703913  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:47.806839  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:47.808981  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:47.811156  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:48.045527  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:48.310041  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:48.310054  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:48.312191  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:48.545310  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:48.807287  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:48.809859  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:48.811695  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:49.044646  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:49.307282  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:49.309613  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:49.311969  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:49.544724  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:49.807037  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:49.809511  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:49.811494  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:50.045759  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:50.204492  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:50.307604  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:50.311777  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:50.313122  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:50.545444  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:50.807342  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:50.809448  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:50.811445  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:51.045332  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:51.307596  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:51.310023  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:51.312095  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:51.545100  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:51.808482  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:51.809429  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:51.812266  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:52.045383  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:52.307331  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:52.309447  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:52.311872  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:52.545092  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:14:52.703022  294890 node_ready.go:57] node "addons-461635" has "Ready":"False" status (will retry)
	I1108 09:14:52.807213  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:52.809543  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:52.811559  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:53.045826  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:53.306764  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:53.308882  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:53.311156  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:53.545290  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:53.822122  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:53.823019  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:53.823599  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:54.066249  294890 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:14:54.066275  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:54.235166  294890 node_ready.go:49] node "addons-461635" is "Ready"
	I1108 09:14:54.235196  294890 node_ready.go:38] duration metric: took 39.534933836s for node "addons-461635" to be "Ready" ...
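Illustrative aside (not part of the captured log): the node_ready.go lines above record a plain poll of the node's Ready condition until it flips to True. A minimal client-go sketch of that pattern, assuming a hypothetical waitNodeReady helper and an already-constructed clientset (this is not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True or
// the timeout expires. Hypothetical helper for illustration only.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reported "Ready":"True"
				}
			}
		}
		time.Sleep(2 * time.Second) // mirrors the periodic "will retry" messages above
	}
	return fmt.Errorf("node %q never became Ready within %s", name, timeout)
}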
	I1108 09:14:54.235211  294890 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:14:54.235266  294890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:14:54.293549  294890 api_server.go:72] duration metric: took 41.896301414s to wait for apiserver process to appear ...
	I1108 09:14:54.293575  294890 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:14:54.293595  294890 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 09:14:54.322542  294890 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 09:14:54.323862  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:54.324851  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:54.324963  294890 api_server.go:141] control plane version: v1.34.1
	I1108 09:14:54.324984  294890 api_server.go:131] duration metric: took 31.402508ms to wait for apiserver health ...
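Illustrative aside (not part of the captured log): the api_server.go healthz probe above is an ordinary HTTPS GET against /healthz, retried until it answers 200 with body "ok". A minimal sketch of that check, assuming TLS verification is skipped purely as an illustration shortcut (a real client should verify the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// checkHealthz returns nil once GET url answers 200 with body "ok".
// Hypothetical helper; e.g. checkHealthz("https://192.168.49.2:8443/healthz", time.Minute).
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("healthz at %s did not report ok within %s", url, timeout)
}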
	I1108 09:14:54.324994  294890 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:14:54.325619  294890 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:14:54.325641  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:54.330846  294890 system_pods.go:59] 19 kube-system pods found
	I1108 09:14:54.330882  294890 system_pods.go:61] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:54.330892  294890 system_pods.go:61] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:54.330900  294890 system_pods.go:61] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending
	I1108 09:14:54.330905  294890 system_pods.go:61] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending
	I1108 09:14:54.330910  294890 system_pods.go:61] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:54.330914  294890 system_pods.go:61] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:54.330924  294890 system_pods.go:61] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:54.330931  294890 system_pods.go:61] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:54.330945  294890 system_pods.go:61] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:54.330951  294890 system_pods.go:61] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:54.330962  294890 system_pods.go:61] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:54.330969  294890 system_pods.go:61] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:54.330982  294890 system_pods.go:61] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending
	I1108 09:14:54.330989  294890 system_pods.go:61] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:54.330999  294890 system_pods.go:61] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:54.331004  294890 system_pods.go:61] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending
	I1108 09:14:54.331011  294890 system_pods.go:61] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending
	I1108 09:14:54.331016  294890 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending
	I1108 09:14:54.331023  294890 system_pods.go:61] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending
	I1108 09:14:54.331028  294890 system_pods.go:74] duration metric: took 6.029053ms to wait for pod list to return data ...
	I1108 09:14:54.331044  294890 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:14:54.336258  294890 default_sa.go:45] found service account: "default"
	I1108 09:14:54.336287  294890 default_sa.go:55] duration metric: took 5.236626ms for default service account to be created ...
	I1108 09:14:54.336306  294890 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:14:54.354648  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:54.354685  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:54.354703  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:54.354709  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending
	I1108 09:14:54.354714  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending
	I1108 09:14:54.354719  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:54.354724  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:54.354731  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:54.354736  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:54.354750  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:54.354755  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:54.354760  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:54.354781  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:54.354786  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending
	I1108 09:14:54.354801  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:54.354808  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:54.354812  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending
	I1108 09:14:54.354816  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending
	I1108 09:14:54.354821  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending
	I1108 09:14:54.354825  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending
	I1108 09:14:54.354842  294890 retry.go:31] will retry after 233.17935ms: missing components: kube-dns
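Illustrative aside (not part of the captured log): the retry.go line above comes from a list-and-retry loop over kube-system pods that keeps waiting while required components (here kube-dns, i.e. CoreDNS) are not yet Running. A rough sketch of that pattern with hypothetical requiredRunning and missingComponents helpers (the real code uses a randomized backoff rather than simple doubling):

package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// requiredRunning lists kube-system pods and retries with a growing delay
// until every required component has at least one Running pod.
func requiredRunning(ctx context.Context, cs kubernetes.Interface, prefixes []string, timeout time.Duration) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err == nil {
			missing := missingComponents(pods.Items, prefixes)
			if len(missing) == 0 {
				return nil
			}
			fmt.Printf("will retry after %s: missing components: %s\n", delay, strings.Join(missing, " "))
		}
		time.Sleep(delay)
		delay *= 2 // simple doubling for illustration
	}
	return fmt.Errorf("components still missing after %s", timeout)
}

// missingComponents returns the prefixes (e.g. "coredns") that have no Running pod.
func missingComponents(pods []corev1.Pod, prefixes []string) []string {
	var missing []string
	for _, prefix := range prefixes {
		found := false
		for _, pod := range pods {
			if strings.HasPrefix(pod.Name, prefix) && pod.Status.Phase == corev1.PodRunning {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, prefix)
		}
	}
	return missing
}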
	I1108 09:14:54.555130  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:54.615046  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:54.615091  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:54.615102  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:54.615109  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:14:54.615120  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:14:54.615125  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:54.615131  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:54.615135  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:54.615147  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:54.615154  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:54.615171  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:54.615184  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:54.615196  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:54.615204  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending
	I1108 09:14:54.615211  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:54.615223  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:54.615229  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:14:54.615250  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:54.615263  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:54.615269  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:14:54.615290  294890 retry.go:31] will retry after 382.644084ms: missing components: kube-dns
	I1108 09:14:54.807035  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:54.907928  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:54.908163  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:55.003592  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:55.003654  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:55.003666  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:55.003677  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:14:55.003686  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:14:55.003690  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:55.003696  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:55.003700  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:55.003715  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:55.003723  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:55.003727  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:55.003732  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:55.003738  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:55.003745  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:14:55.003751  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:55.003760  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:55.003780  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:14:55.003788  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.003802  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.003808  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:14:55.003943  294890 retry.go:31] will retry after 376.455888ms: missing components: kube-dns
	I1108 09:14:55.045843  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:55.310431  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:55.310800  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:55.313456  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:55.385872  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:55.385917  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:14:55.385925  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:55.385933  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:14:55.385942  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:14:55.385947  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:55.385964  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:55.385976  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:55.385980  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:55.385987  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:55.385997  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:55.386003  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:55.386010  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:55.386020  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:14:55.386037  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:55.386049  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:55.386055  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:14:55.386062  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.386074  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.386080  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:14:55.386101  294890 retry.go:31] will retry after 424.221664ms: missing components: kube-dns
	I1108 09:14:55.549034  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:55.810007  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:55.810170  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:55.812257  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:55.815068  294890 system_pods.go:86] 19 kube-system pods found
	I1108 09:14:55.815098  294890 system_pods.go:89] "coredns-66bc5c9577-bj8nx" [7043fb20-df1b-4801-b776-a1f99482a068] Running
	I1108 09:14:55.815108  294890 system_pods.go:89] "csi-hostpath-attacher-0" [5a71e205-b3b2-4e5c-aae3-431f1e592c03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:14:55.815114  294890 system_pods.go:89] "csi-hostpath-resizer-0" [27deb37e-fc3b-4c5b-81fc-c76e0ba0ab26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:14:55.815124  294890 system_pods.go:89] "csi-hostpathplugin-z6vwk" [92cde193-906d-4db1-a6c5-f68bf3ebc3b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:14:55.815130  294890 system_pods.go:89] "etcd-addons-461635" [8b18c652-0f71-4b53-81ef-481b2cea4d8d] Running
	I1108 09:14:55.815135  294890 system_pods.go:89] "kindnet-rtsff" [cb1e0540-d22c-4011-9ae7-ab19942a08ca] Running
	I1108 09:14:55.815140  294890 system_pods.go:89] "kube-apiserver-addons-461635" [d922665f-e20e-497c-8570-5db72badd254] Running
	I1108 09:14:55.815144  294890 system_pods.go:89] "kube-controller-manager-addons-461635" [d043ca93-3440-4c62-acf2-69987e3f3e55] Running
	I1108 09:14:55.815151  294890 system_pods.go:89] "kube-ingress-dns-minikube" [c8a0b48b-0f89-4c9a-8f3f-6793646ff108] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:14:55.815160  294890 system_pods.go:89] "kube-proxy-2b5dx" [f9d2fe81-2af0-48bb-8765-057d1b529853] Running
	I1108 09:14:55.815167  294890 system_pods.go:89] "kube-scheduler-addons-461635" [ab42e6d0-caf8-4fa0-8237-000b3cfb7ab6] Running
	I1108 09:14:55.815174  294890 system_pods.go:89] "metrics-server-85b7d694d7-7rj8w" [ac57c542-0bd0-4ec2-b7df-8e06bf8aa809] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:14:55.815186  294890 system_pods.go:89] "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:14:55.815192  294890 system_pods.go:89] "registry-6b586f9694-6xz6d" [47229ed5-0985-4ecb-bfe3-2ac44b6a7e6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:14:55.815202  294890 system_pods.go:89] "registry-creds-764b6fb674-ch6rs" [5041a3e3-5361-4b5f-bedc-7578fd1e27c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:14:55.815209  294890 system_pods.go:89] "registry-proxy-7g9lx" [a506ebf6-8ac1-4673-98bc-081a54687896] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:14:55.815217  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-67l2n" [a84d46df-18e8-4ed0-b440-bac895299a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.815225  294890 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8nmj" [0601684a-cf9e-44fe-8a08-573f0bbb4cf0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:14:55.815234  294890 system_pods.go:89] "storage-provisioner" [a0cce3a8-4f0f-421d-9cfb-c46916c3bea8] Running
	I1108 09:14:55.815242  294890 system_pods.go:126] duration metric: took 1.478929602s to wait for k8s-apps to be running ...
	I1108 09:14:55.815254  294890 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:14:55.815307  294890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:14:55.829840  294890 system_svc.go:56] duration metric: took 14.576352ms WaitForService to wait for kubelet
	I1108 09:14:55.829919  294890 kubeadm.go:587] duration metric: took 43.432675695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:14:55.829954  294890 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:14:55.833394  294890 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 09:14:55.833471  294890 node_conditions.go:123] node cpu capacity is 2
	I1108 09:14:55.833499  294890 node_conditions.go:105] duration metric: took 3.523336ms to run NodePressure ...
	I1108 09:14:55.833524  294890 start.go:242] waiting for startup goroutines ...
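Illustrative aside (not part of the captured log): the system_svc.go step above just asks systemd whether the kubelet unit is active, using the command shown on the ssh_runner line. Outside minikube's SSH runner, the same check reduces to running systemctl and reading its exit code, for example (hypothetical sketch, run on the node itself):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet systemd unit is active.
// `systemctl is-active --quiet kubelet` exits 0 when the unit is active,
// so the error from Run() carries the whole answer.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}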
	I1108 09:14:56.047718  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:56.306801  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:56.309255  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:56.311768  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:56.553079  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:56.808563  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:56.809527  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:56.811638  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:57.046692  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:57.319414  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:57.321364  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:57.322120  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:57.546430  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:57.811326  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:57.811776  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:57.814860  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:58.046574  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:58.318714  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:58.318913  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:58.319019  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:58.555584  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:58.807374  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:58.811154  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:58.813486  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:59.051238  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:59.308480  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:59.309888  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:59.311591  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:14:59.545684  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:14:59.808328  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:14:59.811706  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:14:59.814552  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:00.068668  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:00.322248  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:00.354824  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:00.355330  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:00.546842  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:00.809950  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:00.813769  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:00.813822  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:01.047403  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:01.307278  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:01.309883  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:01.312244  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:01.545831  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:01.809820  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:01.809935  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:01.812248  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:02.045912  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:02.306523  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:02.309199  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:02.311787  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:02.546737  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:02.806698  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:02.809311  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:02.811618  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:03.047348  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:03.310890  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:03.312269  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:03.313047  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:03.545268  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:03.811289  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:03.811801  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:03.812320  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:04.045963  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:04.308457  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:04.310349  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:04.312248  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:04.545764  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:04.810306  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:04.810765  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:04.814044  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:05.047262  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:05.309181  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:05.313226  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:05.314123  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:05.546145  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:05.807519  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:05.809950  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:05.813031  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:06.045513  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:06.310038  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:06.310562  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:06.312836  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:06.546174  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:06.809242  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:06.812690  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:06.813130  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:07.046266  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:07.307950  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:07.310068  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:07.311813  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:07.544699  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:07.807309  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:07.809431  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:07.811302  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:08.046059  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:08.307538  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:08.310221  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:08.312338  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:08.545712  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:08.809024  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:08.809825  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:08.811941  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:09.052489  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:09.313564  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:09.315460  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:09.315597  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:09.546706  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:09.828693  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:09.828830  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:09.828882  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:10.050797  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:10.307636  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:10.309930  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:10.312472  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:10.546762  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:10.807944  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:10.810058  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:10.815904  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:11.048172  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:11.307290  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:11.310944  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:11.312789  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:11.545049  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:11.830893  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:11.831068  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:11.831133  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:12.045456  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:12.308232  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:12.312335  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:12.313643  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:12.545391  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:12.807485  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:12.810116  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:12.812206  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:13.046818  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:13.309496  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:13.309632  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:13.311829  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:13.545100  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:13.808471  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:13.810963  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:13.812351  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:14.046791  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:14.308105  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:14.317425  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:14.319049  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:14.545409  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:14.809155  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:14.811878  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:14.812129  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:15.047230  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:15.307487  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:15.312261  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:15.314075  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:15.545186  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:15.808107  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:15.861012  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:15.861102  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:16.045830  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:16.307116  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:16.309796  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:16.311899  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:15:16.545947  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:16.808175  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:16.809208  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:16.810955  294890 kapi.go:107] duration metric: took 58.002150717s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 09:15:17.045644  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:17.306980  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:17.309007  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:17.545722  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:17.808651  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:17.809707  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:18.046016  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:18.308482  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:18.316810  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:18.546116  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:18.810023  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:18.810883  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:19.046121  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:19.308114  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:19.311457  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:19.547215  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:19.822045  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:19.826648  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:20.046976  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:20.307431  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:20.309844  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:20.545494  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:20.807022  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:20.809629  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:21.045622  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:21.308297  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:21.309788  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:21.545713  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:21.807951  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:21.809218  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:22.045490  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:22.306747  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:22.309064  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:22.545447  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:22.816052  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:22.816235  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:23.045733  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:23.306625  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:23.308901  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:23.545899  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:23.807038  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:23.813414  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:24.045790  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:24.307440  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:24.309891  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:24.545186  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:24.807334  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:24.809687  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:25.047389  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:25.308368  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:25.310644  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:25.545581  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:25.808664  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:25.809330  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:26.045975  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:26.307887  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:26.309270  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:26.546287  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:26.809975  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:26.810154  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:27.045142  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:27.309812  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:27.309998  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:27.545114  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:27.807373  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:27.809795  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:28.045300  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:28.307748  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:28.310142  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:28.545421  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:28.808971  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:28.810279  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:29.046479  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:29.309333  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:29.311321  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:29.546197  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:29.807615  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:29.809889  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:30.065725  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:30.307350  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:30.311057  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:30.547858  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:30.808154  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:30.809571  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:31.044965  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:31.307658  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:31.309950  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:31.545886  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:31.807187  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:31.809669  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:32.045205  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:32.308130  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:32.311199  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:32.546347  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:32.807190  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:32.810314  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:33.046320  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:33.309490  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:33.311245  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:33.545682  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:33.806916  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:33.809457  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:34.046788  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:34.307543  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:34.310013  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:34.546481  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:34.807461  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:34.810790  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:35.048360  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:35.307776  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:35.310861  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:35.546390  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:35.806990  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:35.808984  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:36.045600  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:36.306685  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:36.309198  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:36.545825  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:36.807086  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:36.809305  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:37.045956  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:37.309897  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:37.310864  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:37.545499  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:37.807936  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:37.809412  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:38.046503  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:38.306632  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:15:38.308876  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:38.545574  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:38.808431  294890 kapi.go:107] duration metric: took 1m16.50475823s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 09:15:38.810960  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:38.811582  294890 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-461635 cluster.
	I1108 09:15:38.814617  294890 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 09:15:38.817467  294890 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1108 09:15:39.045514  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:39.310925  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:39.546600  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:39.810354  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:40.046099  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:40.310188  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:40.545924  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:40.809699  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:41.050473  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:41.310228  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:41.545451  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:41.809714  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:42.045398  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:42.310083  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:42.545305  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:42.811620  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:43.044800  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:43.310341  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:43.545860  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:43.809301  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:44.047227  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:44.309378  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:44.545686  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:44.810406  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:45.057343  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:45.312121  294890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:15:45.546384  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:45.811767  294890 kapi.go:107] duration metric: took 1m27.005701154s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 09:15:46.045877  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:46.545866  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:47.046667  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:47.546065  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:48.046316  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:48.545284  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:49.061173  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:49.546066  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:50.047284  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:50.545747  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:51.046271  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:51.547213  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:52.046066  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:52.545872  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:53.048680  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:53.546100  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:54.046341  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:54.545563  294890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:15:55.046348  294890 kapi.go:107] duration metric: took 1m36.004550257s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 09:15:55.049532  294890 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, registry-creds, nvidia-device-plugin, default-storageclass, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1108 09:15:55.052534  294890 addons.go:515] duration metric: took 1m42.654879835s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner registry-creds nvidia-device-plugin default-storageclass storage-provisioner ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1108 09:15:55.052607  294890 start.go:247] waiting for cluster config update ...
	I1108 09:15:55.052633  294890 start.go:256] writing updated cluster config ...
	I1108 09:15:55.053002  294890 ssh_runner.go:195] Run: rm -f paused
	I1108 09:15:55.058717  294890 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:15:55.062489  294890 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bj8nx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.067650  294890 pod_ready.go:94] pod "coredns-66bc5c9577-bj8nx" is "Ready"
	I1108 09:15:55.067686  294890 pod_ready.go:86] duration metric: took 5.16689ms for pod "coredns-66bc5c9577-bj8nx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.070121  294890 pod_ready.go:83] waiting for pod "etcd-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.075324  294890 pod_ready.go:94] pod "etcd-addons-461635" is "Ready"
	I1108 09:15:55.075393  294890 pod_ready.go:86] duration metric: took 5.187691ms for pod "etcd-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.077818  294890 pod_ready.go:83] waiting for pod "kube-apiserver-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.083268  294890 pod_ready.go:94] pod "kube-apiserver-addons-461635" is "Ready"
	I1108 09:15:55.083349  294890 pod_ready.go:86] duration metric: took 5.497077ms for pod "kube-apiserver-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.086861  294890 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.463630  294890 pod_ready.go:94] pod "kube-controller-manager-addons-461635" is "Ready"
	I1108 09:15:55.463677  294890 pod_ready.go:86] duration metric: took 376.763895ms for pod "kube-controller-manager-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:55.663658  294890 pod_ready.go:83] waiting for pod "kube-proxy-2b5dx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:56.062817  294890 pod_ready.go:94] pod "kube-proxy-2b5dx" is "Ready"
	I1108 09:15:56.062849  294890 pod_ready.go:86] duration metric: took 399.161509ms for pod "kube-proxy-2b5dx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:56.263098  294890 pod_ready.go:83] waiting for pod "kube-scheduler-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:56.663473  294890 pod_ready.go:94] pod "kube-scheduler-addons-461635" is "Ready"
	I1108 09:15:56.663566  294890 pod_ready.go:86] duration metric: took 400.439194ms for pod "kube-scheduler-addons-461635" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:15:56.663597  294890 pod_ready.go:40] duration metric: took 1.604846793s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:15:56.740653  294890 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 09:15:56.745706  294890 out.go:179] * Done! kubectl is now configured to use "addons-461635" cluster and "default" namespace by default
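Note: the gcp-auth messages earlier in this log point out that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of what that looks like, assuming the kubectl context configured above; the pod name, image, and label value are illustrative (the addon message only requires the key to be present):

	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-example            # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"       # key taken from the addon message; the value here is just a conventional choice
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF

Per the same messages, pods that already exist need to be recreated, or the addon re-enabled with --refresh, for credentials to be mounted into them.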
	
	
	==> CRI-O <==
	Nov 08 09:15:58 addons-461635 crio[833]: time="2025-11-08T09:15:58.317509297Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.579606653Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=16690cad-f354-44f4-bece-af41da7f421f name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.580365611Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9615c596-3cc1-4080-8b18-b9c5514188ef name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.58218447Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=563c37b4-f5f1-4895-b5b9-6c8313bff7bb name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.58843586Z" level=info msg="Creating container: default/busybox/busybox" id=6623b4dd-de41-41bc-b1c4-72db3c5d65ac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.58858261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.596045977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.596607279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.61587833Z" level=info msg="Created container 24cb90f38a74698be643869db005e2719cc28ed5bdb135c1ba9f36ab3b34a2f9: default/busybox/busybox" id=6623b4dd-de41-41bc-b1c4-72db3c5d65ac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.61706354Z" level=info msg="Starting container: 24cb90f38a74698be643869db005e2719cc28ed5bdb135c1ba9f36ab3b34a2f9" id=65a163ae-f81b-4ab3-8541-dbb280117ddd name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:16:00 addons-461635 crio[833]: time="2025-11-08T09:16:00.619166316Z" level=info msg="Started container" PID=4938 containerID=24cb90f38a74698be643869db005e2719cc28ed5bdb135c1ba9f36ab3b34a2f9 description=default/busybox/busybox id=65a163ae-f81b-4ab3-8541-dbb280117ddd name=/runtime.v1.RuntimeService/StartContainer sandboxID=61c2c0f589b6822535f05dcf0cc89878289374f15eb481f47dfc74c724411cc5
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.223331537Z" level=info msg="Removing container: 9db9d9205062ec5e1c33ded43526e58f6b65e1479c464746b1e7e7dd448e2491" id=6b0f9fdc-4982-45e0-a5c8-ef3b910dd1be name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.226061938Z" level=info msg="Error loading conmon cgroup of container 9db9d9205062ec5e1c33ded43526e58f6b65e1479c464746b1e7e7dd448e2491: cgroup deleted" id=6b0f9fdc-4982-45e0-a5c8-ef3b910dd1be name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.238209302Z" level=info msg="Removed container 9db9d9205062ec5e1c33ded43526e58f6b65e1479c464746b1e7e7dd448e2491: gcp-auth/gcp-auth-certs-create-5994v/create" id=6b0f9fdc-4982-45e0-a5c8-ef3b910dd1be name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.239897377Z" level=info msg="Removing container: 7ceca45a39e2af70af4b68bddd8b579a428fe6c593074b591946020792bde820" id=17fd6bc3-f2a6-42b9-a0c3-0e8fe542ec8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.242546891Z" level=info msg="Error loading conmon cgroup of container 7ceca45a39e2af70af4b68bddd8b579a428fe6c593074b591946020792bde820: cgroup deleted" id=17fd6bc3-f2a6-42b9-a0c3-0e8fe542ec8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.248799241Z" level=info msg="Removed container 7ceca45a39e2af70af4b68bddd8b579a428fe6c593074b591946020792bde820: gcp-auth/gcp-auth-certs-patch-jjtjm/patch" id=17fd6bc3-f2a6-42b9-a0c3-0e8fe542ec8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.252480225Z" level=info msg="Stopping pod sandbox: 550531f4934644e2d3e67888c2afde81698000ee65444f0f4216899401c5a281" id=537977db-ca4c-42b8-acfd-a5a1d94e9df0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.252655578Z" level=info msg="Stopped pod sandbox (already stopped): 550531f4934644e2d3e67888c2afde81698000ee65444f0f4216899401c5a281" id=537977db-ca4c-42b8-acfd-a5a1d94e9df0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.253879042Z" level=info msg="Removing pod sandbox: 550531f4934644e2d3e67888c2afde81698000ee65444f0f4216899401c5a281" id=702b616f-649a-48a0-9870-932adfe0996a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.266873176Z" level=info msg="Removed pod sandbox: 550531f4934644e2d3e67888c2afde81698000ee65444f0f4216899401c5a281" id=702b616f-649a-48a0-9870-932adfe0996a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.267750674Z" level=info msg="Stopping pod sandbox: 47c2accb08b6c79a76491aede358115d9a1ac122d275be36f8d973f2e9fd5cad" id=cef51c3d-a6a4-41b3-8800-58ba901eed3c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.26792309Z" level=info msg="Stopped pod sandbox (already stopped): 47c2accb08b6c79a76491aede358115d9a1ac122d275be36f8d973f2e9fd5cad" id=cef51c3d-a6a4-41b3-8800-58ba901eed3c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.26857424Z" level=info msg="Removing pod sandbox: 47c2accb08b6c79a76491aede358115d9a1ac122d275be36f8d973f2e9fd5cad" id=eeb0c6eb-eabc-4c29-89be-5b3b63c06dbd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:16:07 addons-461635 crio[833]: time="2025-11-08T09:16:07.277643712Z" level=info msg="Removed pod sandbox: 47c2accb08b6c79a76491aede358115d9a1ac122d275be36f8d973f2e9fd5cad" id=eeb0c6eb-eabc-4c29-89be-5b3b63c06dbd name=/runtime.v1.RuntimeService/RemovePodSandbox
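For reference, the CRI-O entries above are the runtime's side of CRI ImageService/RuntimeService calls (PullImage, CreateContainer, StartContainer) issued by the kubelet. A quick way to cross-check them against what the runtime reports on the node is to run crictl inside the minikube machine; this is only a sketch, and the container ID is a placeholder:

	# list all containers known to CRI-O on the addons-461635 node
	out/minikube-linux-arm64 -p addons-461635 ssh -- sudo crictl ps -a
	# dump the log of a single container, using the ID from the first column
	out/minikube-linux-arm64 -p addons-461635 ssh -- sudo crictl logs <container-id>

The "container status" listing below appears to capture the same information collected at log-gathering time.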
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	24cb90f38a746       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          7 seconds ago        Running             busybox                                  0                   61c2c0f589b68       busybox                                     default
	cec2fa3f2818c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          13 seconds ago       Running             csi-snapshotter                          0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	e75bb914088f3       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          14 seconds ago       Running             csi-provisioner                          0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	b7edd2dbe2ee2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            16 seconds ago       Running             liveness-probe                           0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	0f65082d20771       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           17 seconds ago       Running             hostpath                                 0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	250d2962750d9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            19 seconds ago       Running             gadget                                   0                   53561e106973e       gadget-tg2w5                                gadget
	5b138abbcda7d       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             23 seconds ago       Running             controller                               0                   d50058827b30f       ingress-nginx-controller-675c5ddd98-sk8px   ingress-nginx
	cef07699d0d39       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 29 seconds ago       Running             gcp-auth                                 0                   55597fff21533       gcp-auth-78565c9fb4-gvq8l                   gcp-auth
	4a60c053ded9b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                32 seconds ago       Running             node-driver-registrar                    0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	7adfc46b5e895       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               34 seconds ago       Running             minikube-ingress-dns                     0                   01aa05a2e1595       kube-ingress-dns-minikube                   kube-system
	4eaa15ebd9dc0       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     42 seconds ago       Running             nvidia-device-plugin-ctr                 0                   63e550ba57535       nvidia-device-plugin-daemonset-fdnsr        kube-system
	7ae2515ace62b       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               46 seconds ago       Running             cloud-spanner-emulator                   0                   e96f15f7472ba       cloud-spanner-emulator-6f9fcf858b-67xhk     default
	f6ad305097e58       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              51 seconds ago       Running             registry-proxy                           0                   3a5ef86e0d5bf       registry-proxy-7g9lx                        kube-system
	e243ab620e43b       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           56 seconds ago       Running             registry                                 0                   8cf71cae4c02d       registry-6b586f9694-6xz6d                   kube-system
	9564a18f3ee6e       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      58 seconds ago       Running             volume-snapshot-controller               0                   51bd23c3e84e3       snapshot-controller-7d9fbc56b8-g8nmj        kube-system
	20fd1a8f9fead       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   58 seconds ago       Exited              patch                                    0                   4a5c4f71459ba       ingress-nginx-admission-patch-f9wtz         ingress-nginx
	d4439fc0f8e18       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   59 seconds ago       Exited              create                                   0                   df8317593172b       ingress-nginx-admission-create-ld89t        ingress-nginx
	6e683377d4d46       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        59 seconds ago       Running             metrics-server                           0                   55362cf653862       metrics-server-85b7d694d7-7rj8w             kube-system
	3bc84587627cb       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   87f47320987c0       yakd-dashboard-5ff678cb9-jdt2n              yakd-dashboard
	f06d60de07926       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   9543a9339b6c5       snapshot-controller-7d9fbc56b8-67l2n        kube-system
	bc405f71c0f23       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   6ebbb57263b67       csi-hostpathplugin-z6vwk                    kube-system
	ec999739a0e69       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   5a034617afed0       csi-hostpath-resizer-0                      kube-system
	94ac10889e7e5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   8f80e702f96f2       local-path-provisioner-648f6765c9-t7jnl     local-path-storage
	735a5e20ff11f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   e2b16663885e9       csi-hostpath-attacher-0                     kube-system
	bc2c816611acc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   63ccc2bbbfc81       storage-provisioner                         kube-system
	915d95faab44d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   e8177a5212fa9       coredns-66bc5c9577-bj8nx                    kube-system
	d901d07c82588       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             About a minute ago   Running             kube-proxy                               0                   9f0919d28558b       kube-proxy-2b5dx                            kube-system
	537bc857d2209       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   02dee02e78c10       kindnet-rtsff                               kube-system
	2eaec8104c429       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   0bcc4010ea35c       kube-scheduler-addons-461635                kube-system
	deacd1133d379       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   d08b6d802bd27       kube-controller-manager-addons-461635       kube-system
	69e30ea1fe4ff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   06b0510d551c3       kube-apiserver-addons-461635                kube-system
	79cffc5046936       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   7640bb325d860       etcd-addons-461635                          kube-system
	
	
	==> coredns [915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca] <==
	[INFO] 10.244.0.15:51967 - 6559 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009797s
	[INFO] 10.244.0.15:51967 - 41773 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00262397s
	[INFO] 10.244.0.15:51967 - 28265 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002516425s
	[INFO] 10.244.0.15:51967 - 70 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000214214s
	[INFO] 10.244.0.15:51967 - 38621 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000155383s
	[INFO] 10.244.0.15:57444 - 7382 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149942s
	[INFO] 10.244.0.15:57444 - 7135 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126762s
	[INFO] 10.244.0.15:40551 - 23851 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084046s
	[INFO] 10.244.0.15:40551 - 23670 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000187464s
	[INFO] 10.244.0.15:39933 - 8031 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095944s
	[INFO] 10.244.0.15:39933 - 7825 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164211s
	[INFO] 10.244.0.15:50290 - 34346 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00142989s
	[INFO] 10.244.0.15:50290 - 34141 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001467273s
	[INFO] 10.244.0.15:57164 - 47597 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000182255s
	[INFO] 10.244.0.15:57164 - 47128 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00042727s
	[INFO] 10.244.0.19:59482 - 40264 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156466s
	[INFO] 10.244.0.19:42019 - 24731 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191575s
	[INFO] 10.244.0.19:48528 - 48788 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102302s
	[INFO] 10.244.0.19:33460 - 11470 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000200478s
	[INFO] 10.244.0.19:59069 - 20403 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000246345s
	[INFO] 10.244.0.19:48940 - 19625 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000166131s
	[INFO] 10.244.0.19:36498 - 61627 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002525697s
	[INFO] 10.244.0.19:47129 - 63874 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001921431s
	[INFO] 10.244.0.19:33603 - 17104 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001193884s
	[INFO] 10.244.0.19:52924 - 14532 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001940189s
	
	
	==> describe nodes <==
	Name:               addons-461635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-461635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=addons-461635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_14_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-461635
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-461635"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:14:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-461635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:15:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:15:39 +0000   Sat, 08 Nov 2025 09:14:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:15:39 +0000   Sat, 08 Nov 2025 09:14:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:15:39 +0000   Sat, 08 Nov 2025 09:14:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:15:39 +0000   Sat, 08 Nov 2025 09:14:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-461635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                e197f1ed-5acc-41d9-9508-112a7409480b
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-6f9fcf858b-67xhk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  gadget                      gadget-tg2w5                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  gcp-auth                    gcp-auth-78565c9fb4-gvq8l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-sk8px    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         110s
	  kube-system                 coredns-66bc5c9577-bj8nx                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 csi-hostpathplugin-z6vwk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 etcd-addons-461635                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-rtsff                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-addons-461635                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-addons-461635        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-2b5dx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-addons-461635                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 metrics-server-85b7d694d7-7rj8w              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         111s
	  kube-system                 nvidia-device-plugin-daemonset-fdnsr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-6b586f9694-6xz6d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-creds-764b6fb674-ch6rs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-proxy-7g9lx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 snapshot-controller-7d9fbc56b8-67l2n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 snapshot-controller-7d9fbc56b8-g8nmj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  local-path-storage          local-path-provisioner-648f6765c9-t7jnl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-jdt2n               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     110s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 114s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node addons-461635 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node addons-461635 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node addons-461635 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m1s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m1s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m1s                 kubelet          Node addons-461635 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m1s                 kubelet          Node addons-461635 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m1s                 kubelet          Node addons-461635 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           117s                 node-controller  Node addons-461635 event: Registered Node addons-461635 in Controller
	  Normal   NodeReady                75s                  kubelet          Node addons-461635 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014865] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.528312] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034771] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.823038] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.933277] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 8 08:21] hrtimer: interrupt took 14263725 ns
	[Nov 8 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 8 09:14] overlayfs: idmapped layers are currently not supported
	[  +0.129013] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a] <==
	{"level":"warn","ts":"2025-11-08T09:14:02.728255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.763271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.785133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.822145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.862145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.885483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.903380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.946361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:02.981213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.041252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.077106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.120875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.165429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.180685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.206338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.275320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.293426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.316370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:03.443558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:19.368386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:19.383932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:41.413137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:41.427361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:41.472479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:14:41.481833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50312","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [cef07699d0d3923c65c9264b7f7b78caef0279434b6e6c391a1ba8971d303b93] <==
	2025/11/08 09:15:38 GCP Auth Webhook started!
	2025/11/08 09:15:57 Ready to marshal response ...
	2025/11/08 09:15:57 Ready to write response ...
	2025/11/08 09:15:57 Ready to marshal response ...
	2025/11/08 09:15:57 Ready to write response ...
	2025/11/08 09:15:58 Ready to marshal response ...
	2025/11/08 09:15:58 Ready to write response ...
	
	
	==> kernel <==
	 09:16:08 up  1:58,  0 user,  load average: 4.13, 3.16, 3.26
	Linux addons-461635 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806] <==
	I1108 09:14:13.249133       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:14:13.249265       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:14:43.249898       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:14:43.249907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 09:14:43.250012       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:14:43.250105       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 09:14:44.849391       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:14:44.849426       1 metrics.go:72] Registering metrics
	I1108 09:14:44.849495       1 controller.go:711] "Syncing nftables rules"
	I1108 09:14:53.251555       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:14:53.251614       1 main.go:301] handling current node
	I1108 09:15:03.245306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:15:03.245335       1 main.go:301] handling current node
	I1108 09:15:13.245206       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:15:13.245274       1 main.go:301] handling current node
	I1108 09:15:23.245117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:15:23.245368       1 main.go:301] handling current node
	I1108 09:15:33.250445       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:15:33.250473       1 main.go:301] handling current node
	I1108 09:15:43.245575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:15:43.245607       1 main.go:301] handling current node
	I1108 09:15:53.244850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:15:53.244882       1 main.go:301] handling current node
	I1108 09:16:03.245594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:16:03.245629       1 main.go:301] handling current node
	
	
	==> kube-apiserver [69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc] <==
	I1108 09:14:18.931406       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1108 09:14:18.998396       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.98.35.246"}
	W1108 09:14:19.361418       1 logging.go:55] [core] [Channel #260 SubChannel #261]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1108 09:14:19.375949       1 logging.go:55] [core] [Channel #264 SubChannel #265]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1108 09:14:22.152981       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.38.9"}
	W1108 09:14:41.413293       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1108 09:14:41.427160       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:14:41.466338       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:14:41.481851       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:14:53.864764       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.38.9:443: connect: connection refused
	E1108 09:14:53.865079       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.38.9:443: connect: connection refused" logger="UnhandledError"
	W1108 09:14:53.868507       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.38.9:443: connect: connection refused
	E1108 09:14:53.868546       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.38.9:443: connect: connection refused" logger="UnhandledError"
	W1108 09:14:53.968330       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.38.9:443: connect: connection refused
	E1108 09:14:53.968372       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.38.9:443: connect: connection refused" logger="UnhandledError"
	E1108 09:15:11.680357       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.246.19:443: connect: connection refused" logger="UnhandledError"
	W1108 09:15:11.680546       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:15:11.681446       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1108 09:15:11.681365       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.246.19:443: connect: connection refused" logger="UnhandledError"
	E1108 09:15:11.686348       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.246.19:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.246.19:443: connect: connection refused" logger="UnhandledError"
	I1108 09:15:11.859523       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 09:16:06.197482       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42126: use of closed network connection
	
	
	==> kube-controller-manager [deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e] <==
	I1108 09:14:11.435607       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:14:11.435673       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:14:11.435868       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:14:11.435995       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:14:11.436182       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:14:11.436415       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:14:11.436449       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:14:11.436189       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:14:11.437612       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:14:11.437678       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:14:11.441937       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:14:11.443100       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:14:11.445356       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:14:11.446604       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1108 09:14:17.685298       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1108 09:14:41.405794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:14:41.405950       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1108 09:14:41.405995       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 09:14:41.454414       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1108 09:14:41.458446       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 09:14:41.506426       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:14:41.558750       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:14:56.427476       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1108 09:15:11.511602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:15:11.568219       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854] <==
	I1108 09:14:13.412342       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:14:13.526484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:14:13.628991       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:14:13.629026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:14:13.629147       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:14:13.689209       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:14:13.689263       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:14:13.699706       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:14:13.699995       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:14:13.700019       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:14:13.705867       1 config.go:200] "Starting service config controller"
	I1108 09:14:13.705893       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:14:13.705923       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:14:13.705928       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:14:13.705941       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:14:13.705945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:14:13.706587       1 config.go:309] "Starting node config controller"
	I1108 09:14:13.706601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:14:13.706607       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:14:13.806398       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:14:13.806441       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:14:13.806469       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e] <==
	I1108 09:14:04.855168       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:14:06.157277       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:14:06.157382       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:14:06.157416       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:14:06.157470       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:14:06.185860       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:14:06.186266       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:14:06.188514       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:14:06.188547       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:14:06.189467       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:14:06.189577       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 09:14:06.192097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 09:14:07.788638       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:15:25 addons-461635 kubelet[1278]: I1108 09:15:25.711870    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/cloud-spanner-emulator-6f9fcf858b-67xhk" podStartSLOduration=44.78391411 podStartE2EDuration="1m10.711849129s" podCreationTimestamp="2025-11-08 09:14:15 +0000 UTC" firstStartedPulling="2025-11-08 09:14:55.681237089 +0000 UTC m=+48.589948427" lastFinishedPulling="2025-11-08 09:15:21.609172018 +0000 UTC m=+74.517883446" observedRunningTime="2025-11-08 09:15:22.703739662 +0000 UTC m=+75.612451000" watchObservedRunningTime="2025-11-08 09:15:25.711849129 +0000 UTC m=+78.620560467"
	Nov 08 09:15:25 addons-461635 kubelet[1278]: E1108 09:15:25.883977    1278 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 08 09:15:25 addons-461635 kubelet[1278]: E1108 09:15:25.884144    1278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5041a3e3-5361-4b5f-bedc-7578fd1e27c8-gcr-creds podName:5041a3e3-5361-4b5f-bedc-7578fd1e27c8 nodeName:}" failed. No retries permitted until 2025-11-08 09:15:57.884054266 +0000 UTC m=+110.792765604 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5041a3e3-5361-4b5f-bedc-7578fd1e27c8-gcr-creds") pod "registry-creds-764b6fb674-ch6rs" (UID: "5041a3e3-5361-4b5f-bedc-7578fd1e27c8") : secret "registry-creds-gcr" not found
	Nov 08 09:15:26 addons-461635 kubelet[1278]: I1108 09:15:26.700616    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fdnsr" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:15:34 addons-461635 kubelet[1278]: I1108 09:15:34.770832    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-ingress-dns-minikube" podStartSLOduration=40.65132435 podStartE2EDuration="1m18.770815914s" podCreationTimestamp="2025-11-08 09:14:16 +0000 UTC" firstStartedPulling="2025-11-08 09:14:55.891541236 +0000 UTC m=+48.800252582" lastFinishedPulling="2025-11-08 09:15:34.011032792 +0000 UTC m=+86.919744146" observedRunningTime="2025-11-08 09:15:34.770454827 +0000 UTC m=+87.679166173" watchObservedRunningTime="2025-11-08 09:15:34.770815914 +0000 UTC m=+87.679527260"
	Nov 08 09:15:34 addons-461635 kubelet[1278]: I1108 09:15:34.771612    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-fdnsr" podStartSLOduration=11.895865589 podStartE2EDuration="41.771601522s" podCreationTimestamp="2025-11-08 09:14:53 +0000 UTC" firstStartedPulling="2025-11-08 09:14:55.68597858 +0000 UTC m=+48.594689917" lastFinishedPulling="2025-11-08 09:15:25.561714422 +0000 UTC m=+78.470425850" observedRunningTime="2025-11-08 09:15:25.712584275 +0000 UTC m=+78.621295613" watchObservedRunningTime="2025-11-08 09:15:34.771601522 +0000 UTC m=+87.680312860"
	Nov 08 09:15:41 addons-461635 kubelet[1278]: I1108 09:15:41.055411    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-gvq8l" podStartSLOduration=50.315201179 podStartE2EDuration="1m19.05538923s" podCreationTimestamp="2025-11-08 09:14:22 +0000 UTC" firstStartedPulling="2025-11-08 09:15:09.902243642 +0000 UTC m=+62.810954980" lastFinishedPulling="2025-11-08 09:15:38.642431693 +0000 UTC m=+91.551143031" observedRunningTime="2025-11-08 09:15:38.778814848 +0000 UTC m=+91.687526202" watchObservedRunningTime="2025-11-08 09:15:41.05538923 +0000 UTC m=+93.964100568"
	Nov 08 09:15:41 addons-461635 kubelet[1278]: I1108 09:15:41.208619    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af08389-660a-4efa-8e8a-35fd083ed93f" path="/var/lib/kubelet/pods/0af08389-660a-4efa-8e8a-35fd083ed93f/volumes"
	Nov 08 09:15:45 addons-461635 kubelet[1278]: I1108 09:15:45.227178    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b270cd-75c9-45b1-9d93-e78a300e0569" path="/var/lib/kubelet/pods/94b270cd-75c9-45b1-9d93-e78a300e0569/volumes"
	Nov 08 09:15:45 addons-461635 kubelet[1278]: I1108 09:15:45.814404    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-sk8px" podStartSLOduration=52.927185784 podStartE2EDuration="1m27.814371172s" podCreationTimestamp="2025-11-08 09:14:18 +0000 UTC" firstStartedPulling="2025-11-08 09:15:09.945349963 +0000 UTC m=+62.854061301" lastFinishedPulling="2025-11-08 09:15:44.832535351 +0000 UTC m=+97.741246689" observedRunningTime="2025-11-08 09:15:45.813363112 +0000 UTC m=+98.722074532" watchObservedRunningTime="2025-11-08 09:15:45.814371172 +0000 UTC m=+98.723082510"
	Nov 08 09:15:51 addons-461635 kubelet[1278]: I1108 09:15:51.883381    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-tg2w5" podStartSLOduration=67.332892578 podStartE2EDuration="1m34.883352066s" podCreationTimestamp="2025-11-08 09:14:17 +0000 UTC" firstStartedPulling="2025-11-08 09:15:21.686982539 +0000 UTC m=+74.595693876" lastFinishedPulling="2025-11-08 09:15:49.237442026 +0000 UTC m=+102.146153364" observedRunningTime="2025-11-08 09:15:49.852493413 +0000 UTC m=+102.761204759" watchObservedRunningTime="2025-11-08 09:15:51.883352066 +0000 UTC m=+104.792063403"
	Nov 08 09:15:52 addons-461635 kubelet[1278]: I1108 09:15:52.396348    1278 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 08 09:15:52 addons-461635 kubelet[1278]: I1108 09:15:52.396404    1278 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 08 09:15:54 addons-461635 kubelet[1278]: I1108 09:15:54.885652    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-z6vwk" podStartSLOduration=1.6944408279999998 podStartE2EDuration="1m1.885634846s" podCreationTimestamp="2025-11-08 09:14:53 +0000 UTC" firstStartedPulling="2025-11-08 09:14:54.58290646 +0000 UTC m=+47.491617798" lastFinishedPulling="2025-11-08 09:15:54.77410047 +0000 UTC m=+107.682811816" observedRunningTime="2025-11-08 09:15:54.8854558 +0000 UTC m=+107.794167154" watchObservedRunningTime="2025-11-08 09:15:54.885634846 +0000 UTC m=+107.794346184"
	Nov 08 09:15:57 addons-461635 kubelet[1278]: E1108 09:15:57.898241    1278 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 08 09:15:57 addons-461635 kubelet[1278]: E1108 09:15:57.898328    1278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5041a3e3-5361-4b5f-bedc-7578fd1e27c8-gcr-creds podName:5041a3e3-5361-4b5f-bedc-7578fd1e27c8 nodeName:}" failed. No retries permitted until 2025-11-08 09:17:01.898308209 +0000 UTC m=+174.807019547 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5041a3e3-5361-4b5f-bedc-7578fd1e27c8-gcr-creds") pod "registry-creds-764b6fb674-ch6rs" (UID: "5041a3e3-5361-4b5f-bedc-7578fd1e27c8") : secret "registry-creds-gcr" not found
	Nov 08 09:15:58 addons-461635 kubelet[1278]: I1108 09:15:58.100124    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rllvr\" (UniqueName: \"kubernetes.io/projected/7823cd6a-4eb0-420d-b701-8acdbca2812c-kube-api-access-rllvr\") pod \"busybox\" (UID: \"7823cd6a-4eb0-420d-b701-8acdbca2812c\") " pod="default/busybox"
	Nov 08 09:15:58 addons-461635 kubelet[1278]: I1108 09:15:58.100185    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7823cd6a-4eb0-420d-b701-8acdbca2812c-gcp-creds\") pod \"busybox\" (UID: \"7823cd6a-4eb0-420d-b701-8acdbca2812c\") " pod="default/busybox"
	Nov 08 09:15:58 addons-461635 kubelet[1278]: W1108 09:15:58.310873    1278 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2c24103c57a646501be49a43c81d4d1f4b4db145515b044b4df596c98449e8c6/crio-61c2c0f589b6822535f05dcf0cc89878289374f15eb481f47dfc74c724411cc5 WatchSource:0}: Error finding container 61c2c0f589b6822535f05dcf0cc89878289374f15eb481f47dfc74c724411cc5: Status 404 returned error can't find the container with id 61c2c0f589b6822535f05dcf0cc89878289374f15eb481f47dfc74c724411cc5
	Nov 08 09:16:00 addons-461635 kubelet[1278]: I1108 09:16:00.889827    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.623525562 podStartE2EDuration="3.88980776s" podCreationTimestamp="2025-11-08 09:15:57 +0000 UTC" firstStartedPulling="2025-11-08 09:15:58.315221723 +0000 UTC m=+111.223933061" lastFinishedPulling="2025-11-08 09:16:00.581503921 +0000 UTC m=+113.490215259" observedRunningTime="2025-11-08 09:16:00.888257737 +0000 UTC m=+113.796969075" watchObservedRunningTime="2025-11-08 09:16:00.88980776 +0000 UTC m=+113.798519098"
	Nov 08 09:16:07 addons-461635 kubelet[1278]: I1108 09:16:07.222075    1278 scope.go:117] "RemoveContainer" containerID="9db9d9205062ec5e1c33ded43526e58f6b65e1479c464746b1e7e7dd448e2491"
	Nov 08 09:16:07 addons-461635 kubelet[1278]: I1108 09:16:07.238584    1278 scope.go:117] "RemoveContainer" containerID="7ceca45a39e2af70af4b68bddd8b579a428fe6c593074b591946020792bde820"
	Nov 08 09:16:07 addons-461635 kubelet[1278]: E1108 09:16:07.360785    1278 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/81d91819bab58aa801b0f3e4edbe0b4803e352fb3b31697b70f6de6d8ffb828e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/81d91819bab58aa801b0f3e4edbe0b4803e352fb3b31697b70f6de6d8ffb828e/diff: no such file or directory, extraDiskErr: <nil>
	Nov 08 09:16:07 addons-461635 kubelet[1278]: E1108 09:16:07.372024    1278 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d1a3d16e753c9a67bf1ac4137ecdcf23bb51fb2d997b5d11454143752668995c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d1a3d16e753c9a67bf1ac4137ecdcf23bb51fb2d997b5d11454143752668995c/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-jjtjm_94b270cd-75c9-45b1-9d93-e78a300e0569/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-jjtjm_94b270cd-75c9-45b1-9d93-e78a300e0569/patch/1.log: no such file or directory
	Nov 08 09:16:07 addons-461635 kubelet[1278]: E1108 09:16:07.386525    1278 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/22aecec7c7799e7ae4cf2935d12657981233d4a3a88c512d2ab93ae577576f30/diff" to get inode usage: stat /var/lib/containers/storage/overlay/22aecec7c7799e7ae4cf2935d12657981233d4a3a88c512d2ab93ae577576f30/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2] <==
	W1108 09:15:43.166573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:45.221921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:45.241370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:47.244701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:47.249877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:49.253221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:49.257887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:51.264800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:51.274736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:53.278874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:53.284055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:55.288383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:55.295598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:57.298898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:57.303272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:59.306904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:15:59.311485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:01.315341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:01.320266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:03.322997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:03.330952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:05.334889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:05.339213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:07.343436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:07.355240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-461635 -n addons-461635
helpers_test.go:269: (dbg) Run:  kubectl --context addons-461635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-ld89t ingress-nginx-admission-patch-f9wtz registry-creds-764b6fb674-ch6rs
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-461635 describe pod ingress-nginx-admission-create-ld89t ingress-nginx-admission-patch-f9wtz registry-creds-764b6fb674-ch6rs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-461635 describe pod ingress-nginx-admission-create-ld89t ingress-nginx-admission-patch-f9wtz registry-creds-764b6fb674-ch6rs: exit status 1 (82.894178ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ld89t" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f9wtz" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-ch6rs" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-461635 describe pod ingress-nginx-admission-create-ld89t ingress-nginx-admission-patch-f9wtz registry-creds-764b6fb674-ch6rs: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable headlamp --alsologtostderr -v=1: exit status 11 (268.781433ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:16:09.760148  301416 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:09.761146  301416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:09.761162  301416 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:09.761168  301416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:09.761409  301416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:16:09.761691  301416 mustload.go:66] Loading cluster: addons-461635
	I1108 09:16:09.762048  301416 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:09.762065  301416 addons.go:607] checking whether the cluster is paused
	I1108 09:16:09.762167  301416 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:09.762190  301416 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:16:09.762673  301416 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:16:09.781473  301416 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:09.781532  301416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:16:09.799503  301416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:16:09.909101  301416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:09.909181  301416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:09.942421  301416 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:16:09.942440  301416 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:16:09.942444  301416 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:16:09.942448  301416 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:16:09.942452  301416 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:16:09.942456  301416 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:16:09.942459  301416 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:16:09.942462  301416 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:16:09.942465  301416 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:16:09.942471  301416 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:16:09.942474  301416 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:16:09.942477  301416 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:16:09.942480  301416 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:16:09.942483  301416 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:16:09.942487  301416 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:16:09.942491  301416 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:16:09.942494  301416 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:16:09.942499  301416 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:16:09.942502  301416 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:16:09.942505  301416 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:16:09.942511  301416 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:16:09.942518  301416 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:16:09.942521  301416 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:16:09.942524  301416 cri.go:89] found id: ""
	I1108 09:16:09.942587  301416 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:16:09.957517  301416 out.go:203] 
	W1108 09:16:09.960403  301416 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:16:09.960437  301416 out.go:285] * 
	* 
	W1108 09:16:09.966941  301416 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:16:09.969824  301416 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.13s)
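Note: this and the later addons disable failures in this report share one symptom. Before disabling an addon, minikube checks whether the cluster is paused by listing containers with "sudo runc list -f json" on the node, and that call exits 1 because /run/runc does not exist on this crio node. A minimal manual check, assuming the addons-461635 profile is still running (the /run/crun path is an assumption, since crio deployments commonly use crun rather than runc as the default runtime):

	# hypothetical follow-up commands, not part of the test run
	out/minikube-linux-arm64 -p addons-461635 ssh -- "ls -d /run/runc /run/crun"
	# reproduces the failing call shown in the stderr above
	out/minikube-linux-arm64 -p addons-461635 ssh -- "sudo runc list -f json"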

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-67xhk" [83215d26-cbd6-46d7-a88b-606a4566c9fe] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003376467s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (256.490522ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:16:28.602304  301913 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:28.603147  301913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:28.603163  301913 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:28.603169  301913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:28.603431  301913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:16:28.603753  301913 mustload.go:66] Loading cluster: addons-461635
	I1108 09:16:28.604122  301913 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:28.604141  301913 addons.go:607] checking whether the cluster is paused
	I1108 09:16:28.604244  301913 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:28.604261  301913 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:16:28.604724  301913 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:16:28.623952  301913 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:28.624005  301913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:16:28.641645  301913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:16:28.747535  301913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:28.747637  301913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:28.778409  301913 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:16:28.778436  301913 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:16:28.778442  301913 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:16:28.778446  301913 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:16:28.778449  301913 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:16:28.778452  301913 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:16:28.778456  301913 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:16:28.778459  301913 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:16:28.778463  301913 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:16:28.778474  301913 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:16:28.778478  301913 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:16:28.778482  301913 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:16:28.778485  301913 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:16:28.778489  301913 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:16:28.778493  301913 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:16:28.778503  301913 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:16:28.778506  301913 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:16:28.778515  301913 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:16:28.778519  301913 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:16:28.778522  301913 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:16:28.778527  301913 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:16:28.778533  301913 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:16:28.778536  301913 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:16:28.778539  301913 cri.go:89] found id: ""
	I1108 09:16:28.778598  301913 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:16:28.793570  301913 out.go:203] 
	W1108 09:16:28.796465  301913 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:16:28.796554  301913 out.go:285] * 
	* 
	W1108 09:16:28.803079  301913 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:16:28.806130  301913 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.45s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-461635 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-461635 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-461635 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c49470f9-613e-4630-9e5d-6b44da8ec6aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c49470f9-613e-4630-9e5d-6b44da8ec6aa] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c49470f9-613e-4630-9e5d-6b44da8ec6aa] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003964895s
addons_test.go:967: (dbg) Run:  kubectl --context addons-461635 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 ssh "cat /opt/local-path-provisioner/pvc-aed32540-d952-4f4f-87bc-ef0c1030256d_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-461635 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-461635 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (259.503462ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:16:30.740646  302051 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:30.741508  302051 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:30.741550  302051 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:30.741572  302051 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:30.741866  302051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:16:30.742195  302051 mustload.go:66] Loading cluster: addons-461635
	I1108 09:16:30.742592  302051 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:30.742634  302051 addons.go:607] checking whether the cluster is paused
	I1108 09:16:30.742758  302051 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:30.742791  302051 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:16:30.743316  302051 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:16:30.759595  302051 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:30.759654  302051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:16:30.777767  302051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:16:30.887360  302051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:30.887488  302051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:30.916399  302051 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:16:30.916462  302051 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:16:30.916481  302051 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:16:30.916501  302051 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:16:30.916520  302051 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:16:30.916539  302051 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:16:30.916558  302051 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:16:30.916578  302051 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:16:30.916597  302051 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:16:30.916618  302051 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:16:30.916637  302051 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:16:30.916656  302051 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:16:30.916690  302051 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:16:30.916712  302051 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:16:30.916731  302051 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:16:30.916760  302051 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:16:30.916796  302051 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:16:30.916818  302051 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:16:30.916837  302051 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:16:30.916852  302051 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:16:30.916873  302051 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:16:30.916890  302051 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:16:30.916922  302051 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:16:30.916940  302051 cri.go:89] found id: ""
	I1108 09:16:30.917013  302051 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:16:30.932987  302051 out.go:203] 
	W1108 09:16:30.935961  302051 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:16:30.935985  302051 out.go:285] * 
	* 
	W1108 09:16:30.942521  302051 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:16:30.945545  302051 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.45s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-fdnsr" [8ae582b0-dab8-4517-ad8c-004b79d85bd0] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002786001s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (259.5712ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:16:22.285024  301607 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:22.285996  301607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:22.286013  301607 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:22.286020  301607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:22.286385  301607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:16:22.286719  301607 mustload.go:66] Loading cluster: addons-461635
	I1108 09:16:22.287164  301607 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:22.287185  301607 addons.go:607] checking whether the cluster is paused
	I1108 09:16:22.287336  301607 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:22.287355  301607 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:16:22.287858  301607 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:16:22.306627  301607 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:22.306697  301607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:16:22.330044  301607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:16:22.435697  301607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:22.435814  301607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:22.465177  301607 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:16:22.465196  301607 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:16:22.465201  301607 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:16:22.465205  301607 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:16:22.465208  301607 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:16:22.465212  301607 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:16:22.465215  301607 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:16:22.465222  301607 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:16:22.465225  301607 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:16:22.465232  301607 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:16:22.465235  301607 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:16:22.465238  301607 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:16:22.465241  301607 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:16:22.465244  301607 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:16:22.465247  301607 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:16:22.465252  301607 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:16:22.465255  301607 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:16:22.465259  301607 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:16:22.465262  301607 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:16:22.465295  301607 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:16:22.465303  301607 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:16:22.465307  301607 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:16:22.465311  301607 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:16:22.465314  301607 cri.go:89] found id: ""
	I1108 09:16:22.465362  301607 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:16:22.481270  301607 out.go:203] 
	W1108 09:16:22.484239  301607 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:16:22.484263  301607 out.go:285] * 
	* 
	W1108 09:16:22.490717  301607 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:16:22.493827  301607 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-jdt2n" [17fb7928-7eea-4926-a981-9fc4167303c0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00415326s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-461635 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-461635 addons disable yakd --alsologtostderr -v=1: exit status 11 (253.774022ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:16:16.030431  301482 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:16.031393  301482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:16.031445  301482 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:16.031469  301482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:16.031751  301482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:16:16.032097  301482 mustload.go:66] Loading cluster: addons-461635
	I1108 09:16:16.032520  301482 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:16.032565  301482 addons.go:607] checking whether the cluster is paused
	I1108 09:16:16.032694  301482 config.go:182] Loaded profile config "addons-461635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:16.032730  301482 host.go:66] Checking if "addons-461635" exists ...
	I1108 09:16:16.033261  301482 cli_runner.go:164] Run: docker container inspect addons-461635 --format={{.State.Status}}
	I1108 09:16:16.053500  301482 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:16.053556  301482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-461635
	I1108 09:16:16.071161  301482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/addons-461635/id_rsa Username:docker}
	I1108 09:16:16.175686  301482 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:16.175790  301482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:16.203507  301482 cri.go:89] found id: "cec2fa3f2818c7d2cbe831049aa420ab3a29c014ca33f8ac5e2bc5367556b42b"
	I1108 09:16:16.203527  301482 cri.go:89] found id: "e75bb914088f301f945e543de7f90eaf1a2151babedfcad3cf3145889916b22d"
	I1108 09:16:16.203532  301482 cri.go:89] found id: "b7edd2dbe2ee21da18cfec083e951bd1613c96a078149a910440b410c1a8966b"
	I1108 09:16:16.203535  301482 cri.go:89] found id: "0f65082d20771f8f4577e0616a227d13d6a769267508d0763dbf24aa122faa26"
	I1108 09:16:16.203539  301482 cri.go:89] found id: "4a60c053ded9b8b339984294ad4c96e8cbfa9c380ef1659160310a3f5336dd2d"
	I1108 09:16:16.203543  301482 cri.go:89] found id: "7adfc46b5e895e48be558f4780546932e8939e2426cec8495c21b112d7910094"
	I1108 09:16:16.203546  301482 cri.go:89] found id: "4eaa15ebd9dc0018ac51f5eb25cca48d5abae8302ba133b002e676a1322d98b9"
	I1108 09:16:16.203550  301482 cri.go:89] found id: "f6ad305097e588ce18464d8479c6df9d55e15f115bc88494648d05a9cda6b3d5"
	I1108 09:16:16.203553  301482 cri.go:89] found id: "e243ab620e43b997e5e5d6812030e1772d94d78fe9d9f30e5d3db2c9fc2c77f5"
	I1108 09:16:16.203564  301482 cri.go:89] found id: "9564a18f3ee6e820247efc133d2992e2dcad77cc5f7e4a8d3285a144c365d90b"
	I1108 09:16:16.203571  301482 cri.go:89] found id: "6e683377d4d46a229174cd30d8fe77297bb4f75dc62329df075c701f54ab358c"
	I1108 09:16:16.203574  301482 cri.go:89] found id: "f06d60de079261cdc4ca1377c63a102c1d6bb8791c77d55905784664edc3138f"
	I1108 09:16:16.203577  301482 cri.go:89] found id: "bc405f71c0f23df96dd765bf7946355394ee602438700c8f7bdaaba5fe5291e8"
	I1108 09:16:16.203581  301482 cri.go:89] found id: "ec999739a0e69824f88b5279c2eff6d58e07c1fb8fcf3cc0d83983c3ee662f12"
	I1108 09:16:16.203584  301482 cri.go:89] found id: "735a5e20ff11fbc4e5ba222d06846403cb0b522b22903ac6a2b0b5ed60239e39"
	I1108 09:16:16.203593  301482 cri.go:89] found id: "bc2c816611accd36c85c392a57b21ca2b53c955b5a060a8785bd95f2df6a47f2"
	I1108 09:16:16.203604  301482 cri.go:89] found id: "915d95faab44de101693cd7ee5daf9ea68a901ae0cbc2ef281ffc92f23be1dca"
	I1108 09:16:16.203610  301482 cri.go:89] found id: "d901d07c825884e49e03bf862567e79385d2ff185a4ac6b431ce1792b7f59854"
	I1108 09:16:16.203614  301482 cri.go:89] found id: "537bc857d2209347cc81efbcfeda34a1608f32f0b34ad20768056fb4b2b2d806"
	I1108 09:16:16.203617  301482 cri.go:89] found id: "2eaec8104c429ec28a8447a909da59e2b7c62ec1dbfc90fdccb6504f5137ca7e"
	I1108 09:16:16.203622  301482 cri.go:89] found id: "deacd1133d37946da2b5883b78f6d979eb862e300e39bbe6c189dd13122a168e"
	I1108 09:16:16.203625  301482 cri.go:89] found id: "69e30ea1fe4ff21597b96fb937a012611ede53895793a156ed6a09e7a34f3efc"
	I1108 09:16:16.203628  301482 cri.go:89] found id: "79cffc504693676e00f1205fc5fd69248da2becd41be4db515df355a945e8c4a"
	I1108 09:16:16.203632  301482 cri.go:89] found id: ""
	I1108 09:16:16.203684  301482 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:16:16.218637  301482 out.go:203] 
	W1108 09:16:16.221644  301482 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:16:16.221668  301482 out.go:285] * 
	* 
	W1108 09:16:16.228042  301482 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:16:16.231049  301482 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-461635 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-356848 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-356848 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-r5z5q" [a3b0889b-fc92-4e56-a6ad-aa99d85725a5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1108 09:25:58.068846  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:26:25.776791  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:30:58.068161  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-356848 -n functional-356848
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-08 09:34:00.745523005 +0000 UTC m=+1267.856448909
functional_test.go:1645: (dbg) Run:  kubectl --context functional-356848 describe po hello-node-connect-7d85dfc575-r5z5q -n default
functional_test.go:1645: (dbg) kubectl --context functional-356848 describe po hello-node-connect-7d85dfc575-r5z5q -n default:
Name:             hello-node-connect-7d85dfc575-r5z5q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-356848/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:23:59 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh5pk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lh5pk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r5z5q to functional-356848
  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-356848 logs hello-node-connect-7d85dfc575-r5z5q -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-356848 logs hello-node-connect-7d85dfc575-r5z5q -n default: exit status 1 (95.679265ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-r5z5q" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-356848 logs hello-node-connect-7d85dfc575-r5z5q -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-356848 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-r5z5q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-356848/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:23:59 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh5pk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lh5pk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r5z5q to functional-356848
  Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-356848 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-356848 logs -l app=hello-node-connect: exit status 1 (92.357445ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-r5z5q" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-356848 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-356848 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.21.14
IPs:                      10.104.21.14
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30507/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
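The empty Endpoints field above follows directly from the image pull failure: the only pod matching the app=hello-node-connect selector never becomes Ready, so the NodePort (30507) has no backend to forward to. Two checks that would confirm this, using the same context as the rest of the test (commands shown as a sketch, not output captured in this run):

	kubectl --context functional-356848 get endpoints hello-node-connect
	kubectl --context functional-356848 get pods -l app=hello-node-connect -o wide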
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-356848
helpers_test.go:243: (dbg) docker inspect functional-356848:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "305eee776cb83e63022be74ba47f0421f6d9598b8a24f6304af841bae11d4b16",
	        "Created": "2025-11-08T09:20:17.072310456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309628,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:20:17.148753611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/305eee776cb83e63022be74ba47f0421f6d9598b8a24f6304af841bae11d4b16/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/305eee776cb83e63022be74ba47f0421f6d9598b8a24f6304af841bae11d4b16/hostname",
	        "HostsPath": "/var/lib/docker/containers/305eee776cb83e63022be74ba47f0421f6d9598b8a24f6304af841bae11d4b16/hosts",
	        "LogPath": "/var/lib/docker/containers/305eee776cb83e63022be74ba47f0421f6d9598b8a24f6304af841bae11d4b16/305eee776cb83e63022be74ba47f0421f6d9598b8a24f6304af841bae11d4b16-json.log",
	        "Name": "/functional-356848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-356848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-356848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "305eee776cb83e63022be74ba47f0421f6d9598b8a24f6304af841bae11d4b16",
	                "LowerDir": "/var/lib/docker/overlay2/ff1daab65055b460fe74706ed683373efb23f46a9e1592e3e8a6771bcf602c07-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff1daab65055b460fe74706ed683373efb23f46a9e1592e3e8a6771bcf602c07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff1daab65055b460fe74706ed683373efb23f46a9e1592e3e8a6771bcf602c07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff1daab65055b460fe74706ed683373efb23f46a9e1592e3e8a6771bcf602c07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-356848",
	                "Source": "/var/lib/docker/volumes/functional-356848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-356848",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-356848",
	                "name.minikube.sigs.k8s.io": "functional-356848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e2d4dcf514f27bfb44a4e9a1e7d32bee60a7aaca2737558d428180e42ae92cbd",
	            "SandboxKey": "/var/run/docker/netns/e2d4dcf514f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-356848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:f6:cc:c5:26:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95a735178b2361ffcded4cae6ea3a953fbc7a5740cf1c8ef57f3e5be8d960d88",
	                    "EndpointID": "124b4fcc40084804d65c2e026e48de8aafeec850d23e117a6a3f4baac1fb0096",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-356848",
	                        "305eee776cb8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
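The inspect output shows the kicbase container publishing its guest ports only on 127.0.0.1 with ephemeral host ports, e.g. 8441/tcp (the API server port for this profile) mapped to 33151. If a mapping needs to be re-read outside the test, a sketch using the standard docker CLI against the same container name:

	docker port functional-356848 8441/tcp
	# expected to print something like 127.0.0.1:33151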
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-356848 -n functional-356848
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-356848 logs -n 25: (1.517384473s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-356848 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh            │ functional-356848 ssh -- ls -la /mount-9p                                                                          │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh            │ functional-356848 ssh sudo umount -f /mount-9p                                                                     │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ mount          │ -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount3 --alsologtostderr -v=1 │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh            │ functional-356848 ssh findmnt -T /mount1                                                                           │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ mount          │ -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount1 --alsologtostderr -v=1 │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ mount          │ -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount2 --alsologtostderr -v=1 │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh            │ functional-356848 ssh findmnt -T /mount1                                                                           │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh            │ functional-356848 ssh findmnt -T /mount2                                                                           │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh            │ functional-356848 ssh findmnt -T /mount3                                                                           │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ mount          │ -p functional-356848 --kill=true                                                                                   │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ start          │ -p functional-356848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ start          │ -p functional-356848 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ start          │ -p functional-356848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-356848 --alsologtostderr -v=1                                                     │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ update-context │ functional-356848 update-context --alsologtostderr -v=2                                                            │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ update-context │ functional-356848 update-context --alsologtostderr -v=2                                                            │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ update-context │ functional-356848 update-context --alsologtostderr -v=2                                                            │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ image          │ functional-356848 image ls --format short --alsologtostderr                                                        │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ image          │ functional-356848 image ls --format yaml --alsologtostderr                                                         │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh            │ functional-356848 ssh pgrep buildkitd                                                                              │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ image          │ functional-356848 image build -t localhost/my-image:functional-356848 testdata/build --alsologtostderr             │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ image          │ functional-356848 image ls                                                                                         │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:34 UTC │
	│ image          │ functional-356848 image ls --format json --alsologtostderr                                                         │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:34 UTC │ 08 Nov 25 09:34 UTC │
	│ image          │ functional-356848 image ls --format table --alsologtostderr                                                        │ functional-356848 │ jenkins │ v1.37.0 │ 08 Nov 25 09:34 UTC │ 08 Nov 25 09:34 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:33:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:33:43.943533  321506 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:43.943714  321506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:43.943745  321506 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:43.943766  321506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:43.944161  321506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:33:43.944578  321506 out.go:368] Setting JSON to false
	I1108 09:33:43.945593  321506 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8173,"bootTime":1762586251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 09:33:43.945701  321506 start.go:143] virtualization:  
	I1108 09:33:43.948793  321506 out.go:179] * [functional-356848] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:33:43.952628  321506 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:33:43.952634  321506 notify.go:221] Checking for updates...
	I1108 09:33:43.958484  321506 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:43.961210  321506 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:33:43.964088  321506 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 09:33:43.966904  321506 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:33:43.969712  321506 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:33:43.973115  321506 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:43.973724  321506 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:44.005593  321506 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:33:44.005714  321506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:44.077368  321506 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 09:33:44.066036982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:44.077499  321506 docker.go:319] overlay module found
	I1108 09:33:44.080722  321506 out.go:179] * Using the docker driver based on the existing profile
	I1108 09:33:44.083682  321506 start.go:309] selected driver: docker
	I1108 09:33:44.083705  321506 start.go:930] validating driver "docker" against &{Name:functional-356848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-356848 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:33:44.083905  321506 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:33:44.088382  321506 out.go:203] 
	W1108 09:33:44.091424  321506 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB
	I1108 09:33:44.094304  321506 out.go:203] 
	
	
	==> CRI-O <==
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.281671101Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=44dfcb0c-45b7-450e-9c31-9c5c9e740b46 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.283603891Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5af4e14f-6296-4075-be20-98bf076f7d62 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.285987763Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e73661fe-f734-4d2a-87be-eda7738845e4 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.287832306Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.293741274Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2xtj/kubernetes-dashboard" id=11075cd1-43b8-4af1-b868-b2f7556eab89 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.29387383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.299095938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.299300552Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9a564e03d8f1e88d7cadcd7ec8681dc6e037774333f39b0a2b368f3c385b9d84/merged/etc/group: no such file or directory"
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.299681422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.314569417Z" level=info msg="Created container 2a2ae5e3ea5402062e92bfa1b947368e5ecad6d3770ffc56c5264e2342a17017: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2xtj/kubernetes-dashboard" id=11075cd1-43b8-4af1-b868-b2f7556eab89 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.315790699Z" level=info msg="Starting container: 2a2ae5e3ea5402062e92bfa1b947368e5ecad6d3770ffc56c5264e2342a17017" id=9a24f2a7-8f81-4e86-8b29-23382abdeb7f name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.31861424Z" level=info msg="Started container" PID=6929 containerID=2a2ae5e3ea5402062e92bfa1b947368e5ecad6d3770ffc56c5264e2342a17017 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2xtj/kubernetes-dashboard id=9a24f2a7-8f81-4e86-8b29-23382abdeb7f name=/runtime.v1.RuntimeService/StartContainer sandboxID=89f84fd68fd6fddba5aa0b75057535cd573e738ac477223e36992d857439569d
	Nov 08 09:33:50 functional-356848 crio[3549]: time="2025-11-08T09:33:50.568605738Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.511204992Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a" id=e73661fe-f734-4d2a-87be-eda7738845e4 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.512263113Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=5c44901b-4c09-400a-b9c0-d1bfa8c83e13 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.514484202Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=53e4e4ea-b380-415b-8109-32a8430a1e7a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.520792617Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-98cmp/dashboard-metrics-scraper" id=ad031c26-286a-4f81-8301-d275d1bd441a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.520960003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.526218231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.526412876Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/31066dd080e927cce95741ce6df25a3df667fbc7d881d07f913f8f8db068eabb/merged/etc/group: no such file or directory"
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.526740789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.541451953Z" level=info msg="Created container df95e47eb5c45a41c3686761d5f4a13ed81a21e6613052d8d6f5d2b5eebd7264: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-98cmp/dashboard-metrics-scraper" id=ad031c26-286a-4f81-8301-d275d1bd441a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.542583741Z" level=info msg="Starting container: df95e47eb5c45a41c3686761d5f4a13ed81a21e6613052d8d6f5d2b5eebd7264" id=cc789706-ce8d-4edd-a364-154c6733ba97 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:33:51 functional-356848 crio[3549]: time="2025-11-08T09:33:51.544756608Z" level=info msg="Started container" PID=6970 containerID=df95e47eb5c45a41c3686761d5f4a13ed81a21e6613052d8d6f5d2b5eebd7264 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-98cmp/dashboard-metrics-scraper id=cc789706-ce8d-4edd-a364-154c6733ba97 name=/runtime.v1.RuntimeService/StartContainer sandboxID=020e9e63367cc958551a013667e06bf9d30a26e48b8aa7e85a346e2e18081150
	Nov 08 09:34:01 functional-356848 crio[3549]: time="2025-11-08T09:34:01.084632928Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cbf7b864-e3b4-48a0-b452-01d8717ff1d7 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	df95e47eb5c45       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   10 seconds ago      Running             dashboard-metrics-scraper   0                   020e9e63367cc       dashboard-metrics-scraper-77bf4d6c4c-98cmp   kubernetes-dashboard
	2a2ae5e3ea540       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         12 seconds ago      Running             kubernetes-dashboard        0                   89f84fd68fd6f       kubernetes-dashboard-855c9754f9-m2xtj        kubernetes-dashboard
	498b5aad32cac       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              26 seconds ago      Exited              mount-munger                0                   4d308e2074561       busybox-mount                                default
	71eba9c3190ce       docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33                  10 minutes ago      Running             myfrontend                  0                   b7312eac46786       sp-pod                                       default
	37a0b2188e86a       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                  10 minutes ago      Running             nginx                       0                   7f42e3c307dfd       nginx-svc                                    default
	a65624ad274b1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 3                   7d1bbe785bac2       kindnet-fscw2                                kube-system
	00fbfa5ce5837       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  3                   5b49b7381ec6c       kube-proxy-wqkgq                             kube-system
	f9de3564fb877       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   2ff828a8a3712       kube-apiserver-functional-356848             kube-system
	c71e54f79e08d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     3                   4597e662569c9       kube-controller-manager-functional-356848    kube-system
	6ed76ae21dde7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              3                   c477c4a79b1b3       kube-scheduler-functional-356848             kube-system
	b4208c69f26cc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        3                   cda2e03655a42       etcd-functional-356848                       kube-system
	381d08e6ac5df       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         3                   ac06c8d3f6bf6       storage-provisioner                          kube-system
	a2fb38848a1d3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   915d582c0b990       coredns-66bc5c9577-vtfgd                     kube-system
	edab2e7203cf9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Exited              kube-proxy                  2                   5b49b7381ec6c       kube-proxy-wqkgq                             kube-system
	a3815571d7407       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Exited              kube-scheduler              2                   c477c4a79b1b3       kube-scheduler-functional-356848             kube-system
	29083c1c55d9a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Exited              kube-controller-manager     2                   4597e662569c9       kube-controller-manager-functional-356848    kube-system
	e09fe9bcd8037       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 2                   7d1bbe785bac2       kindnet-fscw2                                kube-system
	8d1abf247f77f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Exited              etcd                        2                   cda2e03655a42       etcd-functional-356848                       kube-system
	ad5376b353cdd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 12 minutes ago      Exited              storage-provisioner         2                   ac06c8d3f6bf6       storage-provisioner                          kube-system
	86fb0c33d6609       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 12 minutes ago      Exited              coredns                     1                   915d582c0b990       coredns-66bc5c9577-vtfgd                     kube-system
	
	
	==> coredns [86fb0c33d6609a312eac6495b4c9aa962df3419c03df4e80b578fba8b6c061c8] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45339 - 55225 "HINFO IN 9178548376277826220.413124530912032707. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.05655763s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a2fb38848a1d367fa2acd0e4a0ec178b27982f6e46537855e35fd2165baeeec1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54028 - 19022 "HINFO IN 8435669991785985619.5038469482780048758. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004367918s
	
	
	==> describe nodes <==
	Name:               functional-356848
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-356848
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=functional-356848
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_20_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:20:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-356848
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:34:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:34:01 +0000   Sat, 08 Nov 2025 09:20:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:34:01 +0000   Sat, 08 Nov 2025 09:20:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:34:01 +0000   Sat, 08 Nov 2025 09:20:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:34:01 +0000   Sat, 08 Nov 2025 09:21:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-356848
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                9d6c887a-72af-4018-aba7-971a61a734b2
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-f6xs9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-r5z5q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-vtfgd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-functional-356848                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-fscw2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-functional-356848              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-356848     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-wqkgq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-356848              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-98cmp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m2xtj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-356848 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-356848 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node functional-356848 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-356848 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-356848 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-356848 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                node-controller  Node functional-356848 event: Registered Node functional-356848 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-356848 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-356848 event: Registered Node functional-356848 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-356848 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-356848 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-356848 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-356848 event: Registered Node functional-356848 in Controller
	
	
	==> dmesg <==
	[Nov 8 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014865] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.528312] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034771] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.823038] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.933277] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 8 08:21] hrtimer: interrupt took 14263725 ns
	[Nov 8 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 8 09:14] overlayfs: idmapped layers are currently not supported
	[  +0.129013] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 8 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8d1abf247f77f2cc87a8115ed98cf088bfafe5b72f0c12c241dcc81974ba8e9b] <==
	{"level":"warn","ts":"2025-11-08T09:22:28.966105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:28.987837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:29.007693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:29.037047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:29.051051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:29.066947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:29.126290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44072","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:22:30.306956Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-08T09:22:30.307003Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-356848","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-08T09:22:30.307120Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:22:37.309562Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:22:37.309731Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:22:37.309811Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-08T09:22:37.309939Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-08T09:22:37.309985Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-08T09:22:37.311306Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:22:37.311402Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:22:37.311438Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-08T09:22:37.311550Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:22:37.311604Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:22:37.311637Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:22:37.316479Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-08T09:22:37.316558Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:22:37.316588Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-08T09:22:37.316612Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-356848","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [b4208c69f26cc484bfb3070dea67235f2be5e6d03bba484a3613185ed6e577fb] <==
	{"level":"warn","ts":"2025-11-08T09:22:56.483631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.499469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.519469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.535614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.558613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.594086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.614350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.650379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.662251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.677024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.699792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.718445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.734999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.753055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.773739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.784356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.800179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.816998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.855517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.901712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.918507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:22:56.989229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38208","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:32:55.333425Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1139}
	{"level":"info","ts":"2025-11-08T09:32:55.357588Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1139,"took":"23.828678ms","hash":4055030507,"current-db-size-bytes":3346432,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1425408,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-08T09:32:55.357658Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4055030507,"revision":1139,"compact-revision":-1}
	
	
	==> kernel <==
	 09:34:02 up  2:16,  0 user,  load average: 1.01, 0.51, 1.36
	Linux functional-356848 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a65624ad274b1261a6f2debd4db0e3e6aee4840ff1bff9106adb10c4b7e31217] <==
	I1108 09:31:58.729116       1 main.go:301] handling current node
	I1108 09:32:08.720978       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:32:08.721084       1 main.go:301] handling current node
	I1108 09:32:18.725672       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:32:18.725707       1 main.go:301] handling current node
	I1108 09:32:28.729485       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:32:28.729520       1 main.go:301] handling current node
	I1108 09:32:38.722236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:32:38.722276       1 main.go:301] handling current node
	I1108 09:32:48.725007       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:32:48.725043       1 main.go:301] handling current node
	I1108 09:32:58.729082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:32:58.729191       1 main.go:301] handling current node
	I1108 09:33:08.721004       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:33:08.721062       1 main.go:301] handling current node
	I1108 09:33:18.725023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:33:18.725059       1 main.go:301] handling current node
	I1108 09:33:28.729044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:33:28.729074       1 main.go:301] handling current node
	I1108 09:33:38.721215       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:33:38.721247       1 main.go:301] handling current node
	I1108 09:33:48.721029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:33:48.721061       1 main.go:301] handling current node
	I1108 09:33:58.725347       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:33:58.725446       1 main.go:301] handling current node
	
	
	==> kindnet [e09fe9bcd803790768cb34a15d5a98578ea656df6e88578b3ffd91ccdda2a9a1] <==
	I1108 09:22:26.055191       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:22:26.062083       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1108 09:22:26.062256       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:22:26.062271       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:22:26.062283       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:22:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:22:26.314392       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:22:26.314469       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:22:26.314504       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:22:26.315301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:22:29.815584       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:22:29.815626       1 metrics.go:72] Registering metrics
	I1108 09:22:29.815691       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [f9de3564fb877631c0dcd8172d29514e7c841909cf25d1640120ca816249dda5] <==
	I1108 09:22:57.887178       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:22:57.893779       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:22:57.893980       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:22:57.894256       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:22:57.894403       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:22:57.894881       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:22:57.894963       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:22:58.161342       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:22:58.598003       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:22:59.626087       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:22:59.746464       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:22:59.816105       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:22:59.826163       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:23:08.380783       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:23:08.395131       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:23:08.398168       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:23:13.817973       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.249.243"}
	I1108 09:23:23.353062       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.135.226"}
	I1108 09:23:27.028464       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.148.18"}
	E1108 09:23:59.726368       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60224: use of closed network connection
	I1108 09:24:00.157786       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.21.14"}
	I1108 09:32:57.807971       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:33:45.151610       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:33:45.559245       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.41.132"}
	I1108 09:33:45.578965       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.130.92"}
	
	
	==> kube-controller-manager [29083c1c55d9abb9fe2a7131a118a6d8d606b7ae22991780b3d2560cf5524667] <==
	I1108 09:22:26.753097       1 serving.go:386] Generated self-signed cert in-memory
	I1108 09:22:27.337692       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1108 09:22:27.337795       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:22:27.339788       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1108 09:22:27.340524       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 09:22:27.340739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:22:27.340846       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [c71e54f79e08d158ff0f40268953525579795708473e4177830f4a145ee81546] <==
	I1108 09:23:00.609183       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-356848"
	I1108 09:23:00.609220       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:23:00.612960       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:23:00.618385       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:23:00.622271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:23:00.628459       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:23:00.630500       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:23:00.630632       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:23:00.630686       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:23:00.630754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:23:00.630866       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:23:00.637072       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:23:00.656713       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:23:00.680504       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:23:00.680540       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:23:00.680547       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:23:08.417747       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1108 09:33:45.317357       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:33:45.361264       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:33:45.394462       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:33:45.408651       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:33:45.424676       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:33:45.432097       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:33:45.432234       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:33:45.439201       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [00fbfa5ce583783f76b3e415f549e55a60828e4fc92e9b0e93ad830922963ad1] <==
	I1108 09:22:58.476791       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:22:58.554549       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:22:58.655117       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:22:58.655158       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:22:58.655222       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:22:58.684877       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:22:58.685030       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:22:58.689788       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:22:58.690153       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:22:58.690384       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:22:58.691658       1 config.go:200] "Starting service config controller"
	I1108 09:22:58.691731       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:22:58.691784       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:22:58.691833       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:22:58.691869       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:22:58.691896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:22:58.692512       1 config.go:309] "Starting node config controller"
	I1108 09:22:58.692573       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:22:58.692602       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:22:58.792535       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:22:58.792640       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:22:58.792662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [edab2e7203cf9f91917113b11b32604ce8999ccd8bcdad065cb67b3acfbad71a] <==
	
	
	==> kube-scheduler [6ed76ae21dde7b80834fd05fc04deefd95c3d69423fbf7cb8334121e9014c2fd] <==
	I1108 09:22:53.920449       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:22:57.712265       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:22:57.712369       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:22:57.712404       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:22:57.712445       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:22:57.806877       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:22:57.806977       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:22:57.820805       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:22:57.825061       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:22:57.825176       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:22:57.820958       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:22:57.926222       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [a3815571d7407dff55c5655950f067bb402bd79bee5f44ecdd64e952a07e8014] <==
	I1108 09:22:27.591356       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:22:29.703965       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:22:29.704056       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:22:29.704089       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:22:29.704138       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:22:29.956219       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:22:29.956302       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1108 09:22:29.956374       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1108 09:22:29.958620       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:22:29.958691       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:22:29.958995       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1108 09:22:29.959060       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1108 09:22:29.959096       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 09:22:29.959130       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:22:29.959140       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:22:29.959195       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1108 09:22:29.959209       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1108 09:22:29.959215       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1108 09:22:29.959239       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 08 09:33:25 functional-356848 kubelet[4046]: E1108 09:33:25.084409    4046 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6xs9" podUID="a4338934-a5f4-4216-ac2f-5d5494bb9632"
	Nov 08 09:33:29 functional-356848 kubelet[4046]: E1108 09:33:29.084295    4046 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r5z5q" podUID="a3b0889b-fc92-4e56-a6ad-aa99d85725a5"
	Nov 08 09:33:33 functional-356848 kubelet[4046]: I1108 09:33:33.449226    4046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b4556f52-1746-4720-a4b9-cc36c5d38dc8-test-volume\") pod \"busybox-mount\" (UID: \"b4556f52-1746-4720-a4b9-cc36c5d38dc8\") " pod="default/busybox-mount"
	Nov 08 09:33:33 functional-356848 kubelet[4046]: I1108 09:33:33.449808    4046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmgn4\" (UniqueName: \"kubernetes.io/projected/b4556f52-1746-4720-a4b9-cc36c5d38dc8-kube-api-access-xmgn4\") pod \"busybox-mount\" (UID: \"b4556f52-1746-4720-a4b9-cc36c5d38dc8\") " pod="default/busybox-mount"
	Nov 08 09:33:37 functional-356848 kubelet[4046]: E1108 09:33:37.083723    4046 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6xs9" podUID="a4338934-a5f4-4216-ac2f-5d5494bb9632"
	Nov 08 09:33:38 functional-356848 kubelet[4046]: I1108 09:33:38.086657    4046 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b4556f52-1746-4720-a4b9-cc36c5d38dc8-test-volume\") pod \"b4556f52-1746-4720-a4b9-cc36c5d38dc8\" (UID: \"b4556f52-1746-4720-a4b9-cc36c5d38dc8\") "
	Nov 08 09:33:38 functional-356848 kubelet[4046]: I1108 09:33:38.086719    4046 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmgn4\" (UniqueName: \"kubernetes.io/projected/b4556f52-1746-4720-a4b9-cc36c5d38dc8-kube-api-access-xmgn4\") pod \"b4556f52-1746-4720-a4b9-cc36c5d38dc8\" (UID: \"b4556f52-1746-4720-a4b9-cc36c5d38dc8\") "
	Nov 08 09:33:38 functional-356848 kubelet[4046]: I1108 09:33:38.087193    4046 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4556f52-1746-4720-a4b9-cc36c5d38dc8-test-volume" (OuterVolumeSpecName: "test-volume") pod "b4556f52-1746-4720-a4b9-cc36c5d38dc8" (UID: "b4556f52-1746-4720-a4b9-cc36c5d38dc8"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 08 09:33:38 functional-356848 kubelet[4046]: I1108 09:33:38.091099    4046 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4556f52-1746-4720-a4b9-cc36c5d38dc8-kube-api-access-xmgn4" (OuterVolumeSpecName: "kube-api-access-xmgn4") pod "b4556f52-1746-4720-a4b9-cc36c5d38dc8" (UID: "b4556f52-1746-4720-a4b9-cc36c5d38dc8"). InnerVolumeSpecName "kube-api-access-xmgn4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 09:33:38 functional-356848 kubelet[4046]: I1108 09:33:38.187477    4046 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b4556f52-1746-4720-a4b9-cc36c5d38dc8-test-volume\") on node \"functional-356848\" DevicePath \"\""
	Nov 08 09:33:38 functional-356848 kubelet[4046]: I1108 09:33:38.187717    4046 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmgn4\" (UniqueName: \"kubernetes.io/projected/b4556f52-1746-4720-a4b9-cc36c5d38dc8-kube-api-access-xmgn4\") on node \"functional-356848\" DevicePath \"\""
	Nov 08 09:33:38 functional-356848 kubelet[4046]: I1108 09:33:38.895095    4046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d308e207456176b0ddd0884c899b09748feacba4b7d849fc47a40891198152a"
	Nov 08 09:33:44 functional-356848 kubelet[4046]: E1108 09:33:44.084031    4046 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r5z5q" podUID="a3b0889b-fc92-4e56-a6ad-aa99d85725a5"
	Nov 08 09:33:45 functional-356848 kubelet[4046]: I1108 09:33:45.637869    4046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/38f6f0c8-df1f-4ea8-8225-9c07890d55b0-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-98cmp\" (UID: \"38f6f0c8-df1f-4ea8-8225-9c07890d55b0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-98cmp"
	Nov 08 09:33:45 functional-356848 kubelet[4046]: I1108 09:33:45.638401    4046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/34edbe38-d09e-4825-b74f-1ee58e04cd46-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m2xtj\" (UID: \"34edbe38-d09e-4825-b74f-1ee58e04cd46\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2xtj"
	Nov 08 09:33:45 functional-356848 kubelet[4046]: I1108 09:33:45.638526    4046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgt8l\" (UniqueName: \"kubernetes.io/projected/34edbe38-d09e-4825-b74f-1ee58e04cd46-kube-api-access-zgt8l\") pod \"kubernetes-dashboard-855c9754f9-m2xtj\" (UID: \"34edbe38-d09e-4825-b74f-1ee58e04cd46\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2xtj"
	Nov 08 09:33:45 functional-356848 kubelet[4046]: I1108 09:33:45.638640    4046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7bkn\" (UniqueName: \"kubernetes.io/projected/38f6f0c8-df1f-4ea8-8225-9c07890d55b0-kube-api-access-r7bkn\") pod \"dashboard-metrics-scraper-77bf4d6c4c-98cmp\" (UID: \"38f6f0c8-df1f-4ea8-8225-9c07890d55b0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-98cmp"
	Nov 08 09:33:45 functional-356848 kubelet[4046]: W1108 09:33:45.841781    4046 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/305eee776cb83e63022be74ba47f0421f6d9598b8a24f6304af841bae11d4b16/crio-020e9e63367cc958551a013667e06bf9d30a26e48b8aa7e85a346e2e18081150 WatchSource:0}: Error finding container 020e9e63367cc958551a013667e06bf9d30a26e48b8aa7e85a346e2e18081150: Status 404 returned error can't find the container with id 020e9e63367cc958551a013667e06bf9d30a26e48b8aa7e85a346e2e18081150
	Nov 08 09:33:49 functional-356848 kubelet[4046]: E1108 09:33:49.084318    4046 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6xs9" podUID="a4338934-a5f4-4216-ac2f-5d5494bb9632"
	Nov 08 09:33:50 functional-356848 kubelet[4046]: I1108 09:33:50.946271    4046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2xtj" podStartSLOduration=1.459433955 podStartE2EDuration="5.946243367s" podCreationTimestamp="2025-11-08 09:33:45 +0000 UTC" firstStartedPulling="2025-11-08 09:33:45.796109704 +0000 UTC m=+653.846412637" lastFinishedPulling="2025-11-08 09:33:50.282919034 +0000 UTC m=+658.333222049" observedRunningTime="2025-11-08 09:33:50.94578079 +0000 UTC m=+658.996083731" watchObservedRunningTime="2025-11-08 09:33:50.946243367 +0000 UTC m=+658.996546300"
	Nov 08 09:33:55 functional-356848 kubelet[4046]: E1108 09:33:55.084086    4046 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r5z5q" podUID="a3b0889b-fc92-4e56-a6ad-aa99d85725a5"
	Nov 08 09:34:01 functional-356848 kubelet[4046]: E1108 09:34:01.085596    4046 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Nov 08 09:34:01 functional-356848 kubelet[4046]: E1108 09:34:01.086043    4046 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Nov 08 09:34:01 functional-356848 kubelet[4046]: E1108 09:34:01.086231    4046 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-f6xs9_default(a4338934-a5f4-4216-ac2f-5d5494bb9632): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Nov 08 09:34:01 functional-356848 kubelet[4046]: E1108 09:34:01.086345    4046 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6xs9" podUID="a4338934-a5f4-4216-ac2f-5d5494bb9632"
	
	
	==> kubernetes-dashboard [2a2ae5e3ea5402062e92bfa1b947368e5ecad6d3770ffc56c5264e2342a17017] <==
	2025/11/08 09:33:50 Using namespace: kubernetes-dashboard
	2025/11/08 09:33:50 Using in-cluster config to connect to apiserver
	2025/11/08 09:33:50 Using secret token for csrf signing
	2025/11/08 09:33:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:33:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:33:50 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:33:50 Generating JWE encryption key
	2025/11/08 09:33:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:33:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:33:50 Initializing JWE encryption key from synchronized object
	2025/11/08 09:33:50 Creating in-cluster Sidecar client
	2025/11/08 09:33:50 Serving insecurely on HTTP port: 9090
	2025/11/08 09:33:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:33:50 Starting overwatch
	
	
	==> storage-provisioner [381d08e6ac5df5b686e846b8df152ef51b10469b960a7098cf0449885546e4a1] <==
	W1108 09:33:38.326198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:40.329439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:40.336958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:42.340492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:42.345516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:44.348362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:44.356130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:46.359154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:46.366230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:48.369530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:48.374464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:50.377149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:50.384236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:52.387729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:52.392028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:54.395800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:54.401058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:56.405225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:56.415409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:58.419390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:33:58.424658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:34:00.437985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:34:00.452096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:34:02.455485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:34:02.460753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ad5376b353cdd94dcfe720202342641d4f379020db1577d7c39d57a238493939] <==
	I1108 09:22:00.346374       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:22:00.380844       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:22:00.380901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:22:00.384144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:22:03.838925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:22:08.099123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:22:11.697411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-356848 -n functional-356848
helpers_test.go:269: (dbg) Run:  kubectl --context functional-356848 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-f6xs9 hello-node-connect-7d85dfc575-r5z5q
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-356848 describe pod busybox-mount hello-node-75c85bcc94-f6xs9 hello-node-connect-7d85dfc575-r5z5q
helpers_test.go:290: (dbg) kubectl --context functional-356848 describe pod busybox-mount hello-node-75c85bcc94-f6xs9 hello-node-connect-7d85dfc575-r5z5q:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-356848/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:33:33 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://498b5aad32cacb84adb538204d9ab42f2740afc2e8e66ac3709dcfe7d467c32e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 08 Nov 2025 09:33:35 +0000
	      Finished:     Sat, 08 Nov 2025 09:33:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xmgn4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xmgn4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  30s   default-scheduler  Successfully assigned default/busybox-mount to functional-356848
	  Normal  Pulling    30s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     28s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.224s (2.224s including waiting). Image size: 3774172 bytes.
	  Normal  Created    28s   kubelet            Created container: mount-munger
	  Normal  Started    28s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-f6xs9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-356848/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:23:23 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sb6vb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sb6vb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-f6xs9 to functional-356848
	  Normal   Pulling    7m48s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m48s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m48s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    38s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     38s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-r5z5q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-356848/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:23:59 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh5pk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lh5pk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r5z5q to functional-356848
	  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m53s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m53s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.89s)
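The pod events and kubelet errors above all trace back to the same pull failure: CRI-O's short-name mode is enforcing, so the unqualified image reference "kicbase/echo-server" resolves to an ambiguous registry list and the pull is rejected. A minimal manual cross-check, assuming the node is reachable via minikube ssh and that docker.io is the intended registry for this image (both are assumptions, not part of the test run):

  # Pull the fully-qualified name, which bypasses short-name resolution;
  # if this succeeds, the failure is confined to the unqualified reference
  # used by the hello-node deployments above. (Registry is an assumption.)
  out/minikube-linux-arm64 -p functional-356848 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest
  # Confirm the image is now visible to the runtime on the node.
  out/minikube-linux-arm64 -p functional-356848 ssh -- sudo crictl images | grep echo-server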

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image load --daemon kicbase/echo-server:functional-356848 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-356848" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image load --daemon kicbase/echo-server:functional-356848 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-356848" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-356848
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image load --daemon kicbase/echo-server:functional-356848 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-356848" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-356848 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-356848 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-f6xs9" [a4338934-a5f4-4216-ac2f-5d5494bb9632] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-356848 -n functional-356848
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-08 09:33:23.709976119 +0000 UTC m=+1230.820901982
functional_test.go:1460: (dbg) Run:  kubectl --context functional-356848 describe po hello-node-75c85bcc94-f6xs9 -n default
functional_test.go:1460: (dbg) kubectl --context functional-356848 describe po hello-node-75c85bcc94-f6xs9 -n default:
Name:             hello-node-75c85bcc94-f6xs9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-356848/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:23:23 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sb6vb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-sb6vb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-f6xs9 to functional-356848
  Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m52s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m52s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-356848 logs hello-node-75c85bcc94-f6xs9 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-356848 logs hello-node-75c85bcc94-f6xs9 -n default: exit status 1 (112.032506ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-f6xs9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-356848 logs hello-node-75c85bcc94-f6xs9 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)
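Note: because the hello-node pod never pulls its image, the deployment has no ready endpoints; the ServiceCmd/HTTPS, ServiceCmd/Format and ServiceCmd/URL failures further down (exit status 115, SVC_UNREACHABLE) are downstream symptoms of this same pull failure. A quick sanity check, assuming the default namespace:

    kubectl --context functional-356848 get pods -l app=hello-node
    kubectl --context functional-356848 get endpoints hello-node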

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image save kicbase/echo-server:functional-356848 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1108 09:23:25.051804  317496 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:23:25.051980  317496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:23:25.051990  317496 out.go:374] Setting ErrFile to fd 2...
	I1108 09:23:25.051996  317496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:23:25.052380  317496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:23:25.053510  317496 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:23:25.053668  317496 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:23:25.054475  317496 cli_runner.go:164] Run: docker container inspect functional-356848 --format={{.State.Status}}
	I1108 09:23:25.072559  317496 ssh_runner.go:195] Run: systemctl --version
	I1108 09:23:25.072621  317496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-356848
	I1108 09:23:25.091700  317496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/functional-356848/id_rsa Username:docker}
	I1108 09:23:25.199546  317496 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1108 09:23:25.199603  317496 cache_images.go:255] Failed to load cached images for "functional-356848": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1108 09:23:25.199626  317496 cache_images.go:267] failed pushing to: functional-356848

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
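Note: this failure is a direct consequence of ImageSaveToFile above: the tarball was never written, so the subsequent `image load` stats a missing file. The round trip the two tests exercise looks roughly like the following; /tmp/echo-server-save.tar is a hypothetical path used only for illustration:

    out/minikube-linux-arm64 -p functional-356848 image save kicbase/echo-server:functional-356848 /tmp/echo-server-save.tar
    ls -l /tmp/echo-server-save.tar   # should exist before the load is attempted
    out/minikube-linux-arm64 -p functional-356848 image load /tmp/echo-server-save.tar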

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-356848
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image save --daemon kicbase/echo-server:functional-356848 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-356848
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-356848: exit status 1 (19.719938ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-356848

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-356848

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 service --namespace=default --https --url hello-node: exit status 115 (419.121167ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30877
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-356848 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 service hello-node --url --format={{.IP}}: exit status 115 (417.296916ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-356848 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 service hello-node --url: exit status 115 (393.758801ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30877
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-356848 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30877
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
TestJSONOutput/pause/Command (1.86s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-269320 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-269320 --output=json --user=testUser: exit status 80 (1.859273131s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f4a7b901-eb24-4e8d-8370-959113200efd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-269320 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2cf123a8-130d-4da3-b768-1bd73b3c98b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-08T09:46:23Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"8647370d-0a39-4c26-bc04-aea8e26ba8d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-269320 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.86s)

                                                
                                    
TestJSONOutput/unpause/Command (1.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-269320 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-269320 --output=json --user=testUser: exit status 80 (1.693564184s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2c7d594e-1694-44eb-a58f-04afee53e239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-269320 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"153db0c2-4a6d-4dca-93e7-e3193639dfff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-08T09:46:25Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"f91f64c3-40e5-4132-a411-27b27b538668","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-269320 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.69s)
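Note: both the pause and unpause commands fail on `sudo runc list -f json` with "open /run/runc: no such file or directory", i.e. minikube expects runc state under /run/runc but that directory is missing on this CRI-O node; the same error recurs in TestPause/serial/Pause below. A rough diagnostic sketch, where the runtime name and candidate state directories are assumptions rather than anything confirmed by this log:

    out/minikube-linux-arm64 -p json-output-269320 ssh -- sudo crictl info | grep -i runtime
    out/minikube-linux-arm64 -p json-output-269320 ssh -- sudo ls /run/runc /run/crun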

                                                
                                    
TestPause/serial/Pause (8.25s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-585281 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-585281 --alsologtostderr -v=5: exit status 80 (2.318334923s)

                                                
                                                
-- stdout --
	* Pausing node pause-585281 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:11:31.988694  463659 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:11:31.991695  463659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:11:31.991711  463659 out.go:374] Setting ErrFile to fd 2...
	I1108 10:11:31.991716  463659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:11:31.992029  463659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:11:31.992339  463659 out.go:368] Setting JSON to false
	I1108 10:11:31.992387  463659 mustload.go:66] Loading cluster: pause-585281
	I1108 10:11:31.992873  463659 config.go:182] Loaded profile config "pause-585281": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:11:31.993396  463659 cli_runner.go:164] Run: docker container inspect pause-585281 --format={{.State.Status}}
	I1108 10:11:32.027003  463659 host.go:66] Checking if "pause-585281" exists ...
	I1108 10:11:32.027372  463659 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:11:32.141881  463659 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:11:32.129321107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:11:32.142604  463659 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-585281 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:11:32.145591  463659 out.go:179] * Pausing node pause-585281 ... 
	I1108 10:11:32.149353  463659 host.go:66] Checking if "pause-585281" exists ...
	I1108 10:11:32.149678  463659 ssh_runner.go:195] Run: systemctl --version
	I1108 10:11:32.149740  463659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-585281
	I1108 10:11:32.187844  463659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/pause-585281/id_rsa Username:docker}
	I1108 10:11:32.302842  463659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:11:32.320498  463659 pause.go:52] kubelet running: true
	I1108 10:11:32.320578  463659 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:11:32.660416  463659 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:11:32.660496  463659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:11:32.785007  463659 cri.go:89] found id: "f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f"
	I1108 10:11:32.785034  463659 cri.go:89] found id: "9bd33a96a682a6bf1f4bd44fbc1b47163722c9b2f12470e8668831c357f338c0"
	I1108 10:11:32.785040  463659 cri.go:89] found id: "0f67a9dac80bea7b55941628cdf508d953ef6b68882be9639e87c90d34ad85c0"
	I1108 10:11:32.785044  463659 cri.go:89] found id: "6a1c4f1c1aebd9dc507372057dbea49115539de3328fd5fc6cc5c24ab0cfa8bf"
	I1108 10:11:32.785047  463659 cri.go:89] found id: "05f96289e87d54b856123e4c909df122cf5ea7cafa2a1ea3251fd71853a64ef1"
	I1108 10:11:32.785051  463659 cri.go:89] found id: "a5d01500a77310a76b4659b56909c387796572bf5f8c6be88ba3a86442f8ee91"
	I1108 10:11:32.785073  463659 cri.go:89] found id: "2e09acaec05a41e735e1fd9867f0a8dd659729440cdc0700cbf72474301bfae9"
	I1108 10:11:32.785084  463659 cri.go:89] found id: "da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b"
	I1108 10:11:32.785088  463659 cri.go:89] found id: "5d52d78fc52433186f3c29b69422aeae2f0c3db8c1adcfdf65dedf62e4a27f1a"
	I1108 10:11:32.785094  463659 cri.go:89] found id: "bbd37a53b3ff900f2ae1d8b0266a6f002e6a17e20c476e6951de770c40fd31b1"
	I1108 10:11:32.785098  463659 cri.go:89] found id: "cd79c121a8019fcb2c93baa98419929b529b33cd56a932dcc2771c55ae6e462c"
	I1108 10:11:32.785101  463659 cri.go:89] found id: "3e717018e4db1225a33be4045b2d1897c1b736eb0f7d54c1a6afd67748e324c0"
	I1108 10:11:32.785105  463659 cri.go:89] found id: "f9491753f6ec75b40577ec5da4f195b64c30357340a9a2f07567a89929f81bc7"
	I1108 10:11:32.785108  463659 cri.go:89] found id: "b3d2b33d28762a416c5285c0c97c70b46ec8d299c7cf04769fff1e92b29b0419"
	I1108 10:11:32.785111  463659 cri.go:89] found id: "3d3366a82a04a0d348be12815d6091dcbdf94d13f14ca32a0c7e5d22a7109d78"
	I1108 10:11:32.785121  463659 cri.go:89] found id: ""
	I1108 10:11:32.785182  463659 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:11:32.797057  463659 retry.go:31] will retry after 187.824011ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:11:32Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:11:32.985539  463659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:11:33.005038  463659 pause.go:52] kubelet running: false
	I1108 10:11:33.005164  463659 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:11:33.283600  463659 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:11:33.283725  463659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:11:33.413115  463659 cri.go:89] found id: "f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f"
	I1108 10:11:33.413139  463659 cri.go:89] found id: "9bd33a96a682a6bf1f4bd44fbc1b47163722c9b2f12470e8668831c357f338c0"
	I1108 10:11:33.413145  463659 cri.go:89] found id: "0f67a9dac80bea7b55941628cdf508d953ef6b68882be9639e87c90d34ad85c0"
	I1108 10:11:33.413149  463659 cri.go:89] found id: "6a1c4f1c1aebd9dc507372057dbea49115539de3328fd5fc6cc5c24ab0cfa8bf"
	I1108 10:11:33.413152  463659 cri.go:89] found id: "05f96289e87d54b856123e4c909df122cf5ea7cafa2a1ea3251fd71853a64ef1"
	I1108 10:11:33.413156  463659 cri.go:89] found id: "a5d01500a77310a76b4659b56909c387796572bf5f8c6be88ba3a86442f8ee91"
	I1108 10:11:33.413159  463659 cri.go:89] found id: "2e09acaec05a41e735e1fd9867f0a8dd659729440cdc0700cbf72474301bfae9"
	I1108 10:11:33.413163  463659 cri.go:89] found id: "da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b"
	I1108 10:11:33.413166  463659 cri.go:89] found id: "5d52d78fc52433186f3c29b69422aeae2f0c3db8c1adcfdf65dedf62e4a27f1a"
	I1108 10:11:33.413172  463659 cri.go:89] found id: "bbd37a53b3ff900f2ae1d8b0266a6f002e6a17e20c476e6951de770c40fd31b1"
	I1108 10:11:33.413176  463659 cri.go:89] found id: "cd79c121a8019fcb2c93baa98419929b529b33cd56a932dcc2771c55ae6e462c"
	I1108 10:11:33.413179  463659 cri.go:89] found id: "3e717018e4db1225a33be4045b2d1897c1b736eb0f7d54c1a6afd67748e324c0"
	I1108 10:11:33.413183  463659 cri.go:89] found id: "f9491753f6ec75b40577ec5da4f195b64c30357340a9a2f07567a89929f81bc7"
	I1108 10:11:33.413186  463659 cri.go:89] found id: "b3d2b33d28762a416c5285c0c97c70b46ec8d299c7cf04769fff1e92b29b0419"
	I1108 10:11:33.413197  463659 cri.go:89] found id: "3d3366a82a04a0d348be12815d6091dcbdf94d13f14ca32a0c7e5d22a7109d78"
	I1108 10:11:33.413205  463659 cri.go:89] found id: ""
	I1108 10:11:33.413266  463659 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:11:33.433310  463659 retry.go:31] will retry after 354.076767ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:11:33Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:11:33.787703  463659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:11:33.804761  463659 pause.go:52] kubelet running: false
	I1108 10:11:33.804882  463659 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:11:34.046785  463659 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:11:34.046910  463659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:11:34.180868  463659 cri.go:89] found id: "f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f"
	I1108 10:11:34.180947  463659 cri.go:89] found id: "9bd33a96a682a6bf1f4bd44fbc1b47163722c9b2f12470e8668831c357f338c0"
	I1108 10:11:34.180967  463659 cri.go:89] found id: "0f67a9dac80bea7b55941628cdf508d953ef6b68882be9639e87c90d34ad85c0"
	I1108 10:11:34.180987  463659 cri.go:89] found id: "6a1c4f1c1aebd9dc507372057dbea49115539de3328fd5fc6cc5c24ab0cfa8bf"
	I1108 10:11:34.181007  463659 cri.go:89] found id: "05f96289e87d54b856123e4c909df122cf5ea7cafa2a1ea3251fd71853a64ef1"
	I1108 10:11:34.181027  463659 cri.go:89] found id: "a5d01500a77310a76b4659b56909c387796572bf5f8c6be88ba3a86442f8ee91"
	I1108 10:11:34.181068  463659 cri.go:89] found id: "2e09acaec05a41e735e1fd9867f0a8dd659729440cdc0700cbf72474301bfae9"
	I1108 10:11:34.181089  463659 cri.go:89] found id: "da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b"
	I1108 10:11:34.181109  463659 cri.go:89] found id: "5d52d78fc52433186f3c29b69422aeae2f0c3db8c1adcfdf65dedf62e4a27f1a"
	I1108 10:11:34.181136  463659 cri.go:89] found id: "bbd37a53b3ff900f2ae1d8b0266a6f002e6a17e20c476e6951de770c40fd31b1"
	I1108 10:11:34.181154  463659 cri.go:89] found id: "cd79c121a8019fcb2c93baa98419929b529b33cd56a932dcc2771c55ae6e462c"
	I1108 10:11:34.181182  463659 cri.go:89] found id: "3e717018e4db1225a33be4045b2d1897c1b736eb0f7d54c1a6afd67748e324c0"
	I1108 10:11:34.181199  463659 cri.go:89] found id: "f9491753f6ec75b40577ec5da4f195b64c30357340a9a2f07567a89929f81bc7"
	I1108 10:11:34.181235  463659 cri.go:89] found id: "b3d2b33d28762a416c5285c0c97c70b46ec8d299c7cf04769fff1e92b29b0419"
	I1108 10:11:34.181255  463659 cri.go:89] found id: "3d3366a82a04a0d348be12815d6091dcbdf94d13f14ca32a0c7e5d22a7109d78"
	I1108 10:11:34.181275  463659 cri.go:89] found id: ""
	I1108 10:11:34.181345  463659 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:11:34.202261  463659 out.go:203] 
	W1108 10:11:34.205246  463659 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:11:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:11:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:11:34.205342  463659 out.go:285] * 
	* 
	W1108 10:11:34.212531  463659 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:11:34.215497  463659 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-585281 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-585281
helpers_test.go:243: (dbg) docker inspect pause-585281:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf",
	        "Created": "2025-11-08T10:08:09.247240488Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:08:09.316225424Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf/hosts",
	        "LogPath": "/var/lib/docker/containers/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf-json.log",
	        "Name": "/pause-585281",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-585281:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-585281",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf",
	                "LowerDir": "/var/lib/docker/overlay2/3a1535de8ee1e8b8f9eff1329356ce21eaa6a73d2235448a4ad8b4f54d9e9cc1-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a1535de8ee1e8b8f9eff1329356ce21eaa6a73d2235448a4ad8b4f54d9e9cc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a1535de8ee1e8b8f9eff1329356ce21eaa6a73d2235448a4ad8b4f54d9e9cc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a1535de8ee1e8b8f9eff1329356ce21eaa6a73d2235448a4ad8b4f54d9e9cc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-585281",
	                "Source": "/var/lib/docker/volumes/pause-585281/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-585281",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-585281",
	                "name.minikube.sigs.k8s.io": "pause-585281",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73886e444b8a2b0313385b030dfeae4a204f5068c0997ceba71405a4e4409596",
	            "SandboxKey": "/var/run/docker/netns/73886e444b8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-585281": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:60:1d:21:c7:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b3f2e47b845c3ff917bce851c0bd47d7afec62b040cb09ff0c6d64329a932166",
	                    "EndpointID": "343f37a67d1f795e3528e52f33450bacf315a13d56f10436ee6ea1936d6c5361",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-585281",
	                        "5222e74c7831"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-585281 -n pause-585281
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-585281 -n pause-585281: exit status 2 (471.804907ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-585281 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-585281 logs -n 25: (1.870096961s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-099098 sudo systemctl cat kubelet --no-pager                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status docker --all --full --no-pager                                      │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat docker --no-pager                                                      │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/docker/daemon.json                                                          │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo docker system info                                                                   │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cri-dockerd --version                                                                │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat containerd --no-pager                                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/containerd/config.toml                                                      │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo containerd config dump                                                               │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status crio --all --full --no-pager                                        │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat crio --no-pager                                                        │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo crio config                                                                          │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ delete  │ -p cilium-099098                                                                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p force-systemd-env-000082 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │                     │
	│ pause   │ -p pause-585281 --alsologtostderr -v=5                                                                     │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:11:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:11:00.406326  461087 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:11:00.406488  461087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:11:00.406494  461087 out.go:374] Setting ErrFile to fd 2...
	I1108 10:11:00.406500  461087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:11:00.406803  461087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:11:00.407274  461087 out.go:368] Setting JSON to false
	I1108 10:11:00.408277  461087 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10410,"bootTime":1762586251,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:11:00.408373  461087 start.go:143] virtualization:  
	I1108 10:11:00.418369  461087 out.go:179] * [force-systemd-env-000082] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:11:00.422598  461087 notify.go:221] Checking for updates...
	I1108 10:11:00.422594  461087 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:11:00.437375  461087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:11:00.440665  461087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:11:00.443861  461087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:11:00.446999  461087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:11:00.450064  461087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1108 10:11:00.454705  461087 config.go:182] Loaded profile config "pause-585281": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:11:00.454944  461087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:11:00.480333  461087 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:11:00.480553  461087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:11:00.546813  461087 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:11:00.536445799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:11:00.546933  461087 docker.go:319] overlay module found
	I1108 10:11:00.550385  461087 out.go:179] * Using the docker driver based on user configuration
	I1108 10:11:00.553277  461087 start.go:309] selected driver: docker
	I1108 10:11:00.553311  461087 start.go:930] validating driver "docker" against <nil>
	I1108 10:11:00.553326  461087 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:11:00.554174  461087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:11:00.625624  461087 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:11:00.615875633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:11:00.625781  461087 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:11:00.626009  461087 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 10:11:00.629022  461087 out.go:179] * Using Docker driver with root privileges
	I1108 10:11:00.631812  461087 cni.go:84] Creating CNI manager for ""
	I1108 10:11:00.631873  461087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:11:00.631889  461087 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:11:00.631965  461087 start.go:353] cluster config:
	{Name:force-systemd-env-000082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:11:00.635028  461087 out.go:179] * Starting "force-systemd-env-000082" primary control-plane node in "force-systemd-env-000082" cluster
	I1108 10:11:00.637777  461087 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:11:00.640603  461087 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:11:00.643430  461087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:11:00.643481  461087 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:11:00.643493  461087 cache.go:59] Caching tarball of preloaded images
	I1108 10:11:00.643522  461087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:11:00.643595  461087 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:11:00.643605  461087 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:11:00.643713  461087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/config.json ...
	I1108 10:11:00.643731  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/config.json: {Name:mkcf8aede9b26585f077e4eebfc7536476dbedc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:00.663385  461087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:11:00.663407  461087 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:11:00.663427  461087 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:11:00.663452  461087 start.go:360] acquireMachinesLock for force-systemd-env-000082: {Name:mk9ade79be79f220f84147f63436e59c2fb21cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:11:00.663577  461087 start.go:364] duration metric: took 108.8µs to acquireMachinesLock for "force-systemd-env-000082"
	I1108 10:11:00.663604  461087 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-000082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:11:00.663676  461087 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:11:00.667124  461087 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:11:00.667378  461087 start.go:159] libmachine.API.Create for "force-systemd-env-000082" (driver="docker")
	I1108 10:11:00.667412  461087 client.go:173] LocalClient.Create starting
	I1108 10:11:00.667494  461087 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:11:00.667538  461087 main.go:143] libmachine: Decoding PEM data...
	I1108 10:11:00.667554  461087 main.go:143] libmachine: Parsing certificate...
	I1108 10:11:00.667637  461087 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:11:00.667654  461087 main.go:143] libmachine: Decoding PEM data...
	I1108 10:11:00.667663  461087 main.go:143] libmachine: Parsing certificate...
	I1108 10:11:00.668055  461087 cli_runner.go:164] Run: docker network inspect force-systemd-env-000082 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:11:00.682966  461087 cli_runner.go:211] docker network inspect force-systemd-env-000082 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:11:00.683049  461087 network_create.go:284] running [docker network inspect force-systemd-env-000082] to gather additional debugging logs...
	I1108 10:11:00.683066  461087 cli_runner.go:164] Run: docker network inspect force-systemd-env-000082
	W1108 10:11:00.699497  461087 cli_runner.go:211] docker network inspect force-systemd-env-000082 returned with exit code 1
	I1108 10:11:00.699527  461087 network_create.go:287] error running [docker network inspect force-systemd-env-000082]: docker network inspect force-systemd-env-000082: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-000082 not found
	I1108 10:11:00.699542  461087 network_create.go:289] output of [docker network inspect force-systemd-env-000082]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-000082 not found
	
	** /stderr **
	I1108 10:11:00.699654  461087 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:11:00.714586  461087 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:11:00.714979  461087 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:11:00.715353  461087 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:11:00.715640  461087 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b3f2e47b845c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:91:00:2f:ef:e8} reservation:<nil>}
	I1108 10:11:00.716040  461087 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018e1f30}
	I1108 10:11:00.716071  461087 network_create.go:124] attempt to create docker network force-systemd-env-000082 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 10:11:00.716134  461087 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-000082 force-systemd-env-000082
	I1108 10:11:00.777285  461087 network_create.go:108] docker network force-systemd-env-000082 192.168.85.0/24 created
	I1108 10:11:00.777328  461087 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-000082" container
	I1108 10:11:00.777417  461087 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:11:00.794186  461087 cli_runner.go:164] Run: docker volume create force-systemd-env-000082 --label name.minikube.sigs.k8s.io=force-systemd-env-000082 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:11:00.812081  461087 oci.go:103] Successfully created a docker volume force-systemd-env-000082
	I1108 10:11:00.812177  461087 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-000082-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-000082 --entrypoint /usr/bin/test -v force-systemd-env-000082:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:11:01.352554  461087 oci.go:107] Successfully prepared a docker volume force-systemd-env-000082
	I1108 10:11:01.352602  461087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:11:01.352623  461087 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:11:01.352687  461087 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-000082:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 10:11:07.669085  455293 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.422075994s)
	I1108 10:11:07.669109  455293 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:11:07.669181  455293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:11:07.673548  455293 start.go:564] Will wait 60s for crictl version
	I1108 10:11:07.673616  455293 ssh_runner.go:195] Run: which crictl
	I1108 10:11:07.677374  455293 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:11:07.719347  455293 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:11:07.719455  455293 ssh_runner.go:195] Run: crio --version
	I1108 10:11:07.761133  455293 ssh_runner.go:195] Run: crio --version
	I1108 10:11:07.805662  455293 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:11:07.809447  455293 cli_runner.go:164] Run: docker network inspect pause-585281 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:11:07.831439  455293 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:11:07.835898  455293 kubeadm.go:884] updating cluster {Name:pause-585281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-585281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:11:07.836060  455293 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:11:07.836115  455293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:11:07.898839  455293 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:11:07.898866  455293 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:11:07.898922  455293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:11:07.935776  455293 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:11:07.935795  455293 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:11:07.935803  455293 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:11:07.935903  455293 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-585281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-585281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:11:07.935981  455293 ssh_runner.go:195] Run: crio config
	I1108 10:11:08.020485  455293 cni.go:84] Creating CNI manager for ""
	I1108 10:11:08.020562  455293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:11:08.020603  455293 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:11:08.020655  455293 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-585281 NodeName:pause-585281 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:11:08.020830  455293 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-585281"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:11:08.020970  455293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:11:08.032570  455293 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:11:08.032650  455293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:11:08.042201  455293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 10:11:08.056956  455293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:11:08.073469  455293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1108 10:11:08.089678  455293 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:11:08.094342  455293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:11:08.257824  455293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:11:08.276280  455293 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281 for IP: 192.168.76.2
	I1108 10:11:08.276314  455293 certs.go:195] generating shared ca certs ...
	I1108 10:11:08.276348  455293 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:08.276535  455293 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:11:08.276606  455293 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:11:08.276620  455293 certs.go:257] generating profile certs ...
	I1108 10:11:08.276744  455293 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key
	I1108 10:11:08.276834  455293 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/apiserver.key.9382e487
	I1108 10:11:08.276882  455293 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/proxy-client.key
	I1108 10:11:08.277027  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:11:08.277062  455293 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:11:08.277075  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:11:08.277098  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:11:08.277122  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:11:08.277152  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:11:08.277204  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:11:08.277832  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:11:08.300354  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:11:08.319515  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:11:08.338594  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:11:08.357750  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 10:11:08.375430  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:11:08.396316  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:11:08.413963  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:11:08.431284  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:11:08.449747  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:11:08.468312  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:11:08.485945  455293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:11:08.499042  455293 ssh_runner.go:195] Run: openssl version
	I1108 10:11:08.505560  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:11:08.514556  455293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:08.518373  455293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:08.518477  455293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:08.561151  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:11:08.569363  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:11:08.577990  455293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:11:08.581925  455293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:11:08.581990  455293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:11:08.623118  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:11:08.631412  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:11:08.639954  455293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:11:08.643875  455293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:11:08.643953  455293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:11:08.685452  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:11:08.693820  455293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:11:08.697813  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:11:08.741957  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:11:08.783192  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:11:08.824380  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:11:08.865441  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:11:08.906862  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 10:11:08.948313  455293 kubeadm.go:401] StartCluster: {Name:pause-585281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-585281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:11:08.948437  455293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:11:08.948505  455293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:11:08.976261  455293 cri.go:89] found id: "da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b"
	I1108 10:11:08.976284  455293 cri.go:89] found id: "5d52d78fc52433186f3c29b69422aeae2f0c3db8c1adcfdf65dedf62e4a27f1a"
	I1108 10:11:08.976289  455293 cri.go:89] found id: "bbd37a53b3ff900f2ae1d8b0266a6f002e6a17e20c476e6951de770c40fd31b1"
	I1108 10:11:08.976293  455293 cri.go:89] found id: "cd79c121a8019fcb2c93baa98419929b529b33cd56a932dcc2771c55ae6e462c"
	I1108 10:11:08.976296  455293 cri.go:89] found id: "3e717018e4db1225a33be4045b2d1897c1b736eb0f7d54c1a6afd67748e324c0"
	I1108 10:11:08.976299  455293 cri.go:89] found id: "f9491753f6ec75b40577ec5da4f195b64c30357340a9a2f07567a89929f81bc7"
	I1108 10:11:08.976302  455293 cri.go:89] found id: "b3d2b33d28762a416c5285c0c97c70b46ec8d299c7cf04769fff1e92b29b0419"
	I1108 10:11:08.976305  455293 cri.go:89] found id: "d480e4d9a291fe52ec9ea2c2b32ab9c33154b183a934ae4982f262482e10f6b2"
	I1108 10:11:08.976308  455293 cri.go:89] found id: "f3ad499cb9437a4e259f93249bd95e93b63c48029618f98e02f9dc6922388226"
	I1108 10:11:08.976316  455293 cri.go:89] found id: "3d3366a82a04a0d348be12815d6091dcbdf94d13f14ca32a0c7e5d22a7109d78"
	I1108 10:11:08.976319  455293 cri.go:89] found id: "bb3bc1b7161e709f57eb7e833763492d5028f77a4eda96f6b8dc67a64c5adfc1"
	I1108 10:11:08.976322  455293 cri.go:89] found id: "bbe481630744841ccaed0db9fce6bb52bc510b3db87e24ad66ea3eebe37bebe9"
	I1108 10:11:08.976325  455293 cri.go:89] found id: "479d30e3e53d7f09520a9e0325d8e785b53081f9b1f26424b87e0c9430a03b2e"
	I1108 10:11:08.976329  455293 cri.go:89] found id: "c0521ccb74deafa2ff1de55dcde0a8e896e81ca44ba308ff269d6f5a89c789ed"
	I1108 10:11:08.976332  455293 cri.go:89] found id: ""
	I1108 10:11:08.976381  455293 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:11:08.987361  455293 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:11:08Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:11:08.987461  455293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:11:08.995180  455293 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:11:08.995250  455293 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:11:08.995347  455293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:11:09.002917  455293 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:11:09.003554  455293 kubeconfig.go:125] found "pause-585281" server: "https://192.168.76.2:8443"
	I1108 10:11:09.004304  455293 kapi.go:59] client config for pause-585281: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key", CAFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21275c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 10:11:09.004848  455293 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 10:11:09.004862  455293 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 10:11:09.004867  455293 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 10:11:09.004872  455293 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 10:11:09.004876  455293 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 10:11:09.005320  455293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:11:09.015119  455293 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:11:09.015203  455293 kubeadm.go:602] duration metric: took 19.941505ms to restartPrimaryControlPlane
	I1108 10:11:09.015220  455293 kubeadm.go:403] duration metric: took 66.917587ms to StartCluster
	I1108 10:11:09.015237  455293 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:09.015322  455293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:11:09.015921  455293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:09.016183  455293 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:11:09.016577  455293 config.go:182] Loaded profile config "pause-585281": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:11:09.016568  455293 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:11:09.021733  455293 out.go:179] * Verifying Kubernetes components...
	I1108 10:11:09.021896  455293 out.go:179] * Enabled addons: 
	I1108 10:11:05.782843  461087 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-000082:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.430114299s)
	I1108 10:11:05.782875  461087 kic.go:203] duration metric: took 4.430250176s to extract preloaded images to volume ...
	W1108 10:11:05.783020  461087 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:11:05.783164  461087 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:11:05.833440  461087 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-000082 --name force-systemd-env-000082 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-000082 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-000082 --network force-systemd-env-000082 --ip 192.168.85.2 --volume force-systemd-env-000082:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:11:06.168131  461087 cli_runner.go:164] Run: docker container inspect force-systemd-env-000082 --format={{.State.Running}}
	I1108 10:11:06.193255  461087 cli_runner.go:164] Run: docker container inspect force-systemd-env-000082 --format={{.State.Status}}
	I1108 10:11:06.218300  461087 cli_runner.go:164] Run: docker exec force-systemd-env-000082 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:11:06.288768  461087 oci.go:144] the created container "force-systemd-env-000082" has a running status.
	I1108 10:11:06.288794  461087 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa...
	I1108 10:11:06.636622  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1108 10:11:06.636673  461087 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:11:06.666118  461087 cli_runner.go:164] Run: docker container inspect force-systemd-env-000082 --format={{.State.Status}}
	I1108 10:11:06.684032  461087 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:11:06.684052  461087 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-000082 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:11:06.725380  461087 cli_runner.go:164] Run: docker container inspect force-systemd-env-000082 --format={{.State.Status}}
	I1108 10:11:06.745548  461087 machine.go:94] provisionDockerMachine start ...
	I1108 10:11:06.745641  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:06.764979  461087 main.go:143] libmachine: Using SSH client type: native
	I1108 10:11:06.765319  461087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1108 10:11:06.765335  461087 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:11:06.765987  461087 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:11:09.924752  461087 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-000082
	
	I1108 10:11:09.924782  461087 ubuntu.go:182] provisioning hostname "force-systemd-env-000082"
	I1108 10:11:09.924893  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:09.942938  461087 main.go:143] libmachine: Using SSH client type: native
	I1108 10:11:09.943242  461087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1108 10:11:09.943261  461087 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-000082 && echo "force-systemd-env-000082" | sudo tee /etc/hostname
	I1108 10:11:10.111163  461087 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-000082
	
	I1108 10:11:10.111249  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:10.129436  461087 main.go:143] libmachine: Using SSH client type: native
	I1108 10:11:10.129765  461087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1108 10:11:10.129789  461087 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-000082' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-000082/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-000082' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:11:10.285277  461087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:11:10.285361  461087 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:11:10.285411  461087 ubuntu.go:190] setting up certificates
	I1108 10:11:10.285439  461087 provision.go:84] configureAuth start
	I1108 10:11:10.285525  461087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-000082
	I1108 10:11:10.314006  461087 provision.go:143] copyHostCerts
	I1108 10:11:10.314054  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:11:10.314091  461087 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:11:10.314103  461087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:11:10.314184  461087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:11:10.314284  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:11:10.314306  461087 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:11:10.314311  461087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:11:10.314350  461087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:11:10.314405  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:11:10.314426  461087 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:11:10.314436  461087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:11:10.314461  461087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:11:10.314514  461087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-000082 san=[127.0.0.1 192.168.85.2 force-systemd-env-000082 localhost minikube]
	I1108 10:11:09.024612  455293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:11:09.024785  455293 addons.go:515] duration metric: took 8.212158ms for enable addons: enabled=[]
	I1108 10:11:09.166948  455293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:11:09.181644  455293 node_ready.go:35] waiting up to 6m0s for node "pause-585281" to be "Ready" ...
	W1108 10:11:11.182982  455293 node_ready.go:55] error getting node "pause-585281" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/pause-585281": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:11:10.913549  461087 provision.go:177] copyRemoteCerts
	I1108 10:11:10.913641  461087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:11:10.913681  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:10.931686  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.040579  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1108 10:11:11.040679  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:11:11.058691  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1108 10:11:11.058758  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 10:11:11.077462  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1108 10:11:11.077583  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:11:11.096772  461087 provision.go:87] duration metric: took 811.295957ms to configureAuth
	I1108 10:11:11.096801  461087 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:11:11.097131  461087 config.go:182] Loaded profile config "force-systemd-env-000082": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:11:11.097280  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.115517  461087 main.go:143] libmachine: Using SSH client type: native
	I1108 10:11:11.115841  461087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1108 10:11:11.115863  461087 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:11:11.379381  461087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:11:11.379405  461087 machine.go:97] duration metric: took 4.633836912s to provisionDockerMachine
	I1108 10:11:11.379415  461087 client.go:176] duration metric: took 10.711996117s to LocalClient.Create
	I1108 10:11:11.379429  461087 start.go:167] duration metric: took 10.71205256s to libmachine.API.Create "force-systemd-env-000082"
	I1108 10:11:11.379451  461087 start.go:293] postStartSetup for "force-systemd-env-000082" (driver="docker")
	I1108 10:11:11.379465  461087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:11:11.379526  461087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:11:11.379566  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.396097  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.501247  461087 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:11:11.504766  461087 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:11:11.504803  461087 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:11:11.504829  461087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:11:11.504927  461087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:11:11.505017  461087 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:11:11.505030  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> /etc/ssl/certs/2940852.pem
	I1108 10:11:11.505145  461087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:11:11.512575  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:11:11.530148  461087 start.go:296] duration metric: took 150.661795ms for postStartSetup
	I1108 10:11:11.530608  461087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-000082
	I1108 10:11:11.550697  461087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/config.json ...
	I1108 10:11:11.550982  461087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:11:11.551042  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.567632  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.669953  461087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:11:11.674599  461087 start.go:128] duration metric: took 11.010907643s to createHost
	I1108 10:11:11.674625  461087 start.go:83] releasing machines lock for "force-systemd-env-000082", held for 11.011038548s
	I1108 10:11:11.674697  461087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-000082
	I1108 10:11:11.692356  461087 ssh_runner.go:195] Run: cat /version.json
	I1108 10:11:11.692417  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.692724  461087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:11:11.692829  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.709623  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.711506  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.812634  461087 ssh_runner.go:195] Run: systemctl --version
	I1108 10:11:11.934735  461087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:11:11.971210  461087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:11:11.975568  461087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:11:11.975685  461087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:11:12.006233  461087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:11:12.006264  461087 start.go:496] detecting cgroup driver to use...
	I1108 10:11:12.006283  461087 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1108 10:11:12.006354  461087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:11:12.025481  461087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:11:12.038801  461087 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:11:12.038872  461087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:11:12.055059  461087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:11:12.074011  461087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:11:12.185445  461087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:11:12.301339  461087 docker.go:234] disabling docker service ...
	I1108 10:11:12.301477  461087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:11:12.325427  461087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:11:12.339708  461087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:11:12.454839  461087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:11:12.574289  461087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:11:12.587706  461087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:11:12.602901  461087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:11:12.602969  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.612825  461087 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 10:11:12.612893  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.622569  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.631687  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.640997  461087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:11:12.649046  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.657995  461087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.672011  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.681676  461087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:11:12.689968  461087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:11:12.697628  461087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:11:12.828470  461087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:11:12.960357  461087 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:11:12.960430  461087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:11:12.964839  461087 start.go:564] Will wait 60s for crictl version
	I1108 10:11:12.965116  461087 ssh_runner.go:195] Run: which crictl
	I1108 10:11:12.969245  461087 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:11:12.999883  461087 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:11:13.000021  461087 ssh_runner.go:195] Run: crio --version
	I1108 10:11:13.030652  461087 ssh_runner.go:195] Run: crio --version
	I1108 10:11:13.064905  461087 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:11:13.067935  461087 cli_runner.go:164] Run: docker network inspect force-systemd-env-000082 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:11:13.084631  461087 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:11:13.088857  461087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:11:13.098866  461087 kubeadm.go:884] updating cluster {Name:force-systemd-env-000082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:11:13.098978  461087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:11:13.099032  461087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:11:13.132542  461087 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:11:13.132568  461087 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:11:13.132638  461087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:11:13.160282  461087 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:11:13.160313  461087 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:11:13.160322  461087 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:11:13.160416  461087 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-000082 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:11:13.160514  461087 ssh_runner.go:195] Run: crio config
	I1108 10:11:13.229145  461087 cni.go:84] Creating CNI manager for ""
	I1108 10:11:13.229169  461087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:11:13.229221  461087 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:11:13.229252  461087 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-000082 NodeName:force-systemd-env-000082 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:11:13.229426  461087 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-000082"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:11:13.229502  461087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:11:13.237396  461087 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:11:13.237486  461087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:11:13.245155  461087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1108 10:11:13.257964  461087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:11:13.271455  461087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1108 10:11:13.284809  461087 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:11:13.288678  461087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:11:13.298531  461087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:11:13.410061  461087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:11:13.427024  461087 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082 for IP: 192.168.85.2
	I1108 10:11:13.427047  461087 certs.go:195] generating shared ca certs ...
	I1108 10:11:13.427063  461087 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:13.427231  461087 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:11:13.427290  461087 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:11:13.427303  461087 certs.go:257] generating profile certs ...
	I1108 10:11:13.427379  461087 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.key
	I1108 10:11:13.427407  461087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.crt with IP's: []
	I1108 10:11:14.886388  461087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.crt ...
	I1108 10:11:14.886472  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.crt: {Name:mk7f289941456bbbd39d15b5d1963e1264c2c34d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:14.886713  461087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.key ...
	I1108 10:11:14.886755  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.key: {Name:mk683f7ffff9f8c8a187abefa434a8f8bdceb939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:14.886901  461087 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key.32f5ab85
	I1108 10:11:14.886950  461087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt.32f5ab85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 10:11:15.443000  461087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt.32f5ab85 ...
	I1108 10:11:15.443073  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt.32f5ab85: {Name:mk0d01ffe0e7573b870f75d8ad0164e2457a62ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:15.443305  461087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key.32f5ab85 ...
	I1108 10:11:15.443347  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key.32f5ab85: {Name:mkd2d89b1a6b561c7b22e7a3941d813e824c1c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:15.443490  461087 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt.32f5ab85 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt
	I1108 10:11:15.443609  461087 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key.32f5ab85 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key
	I1108 10:11:15.443717  461087 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key
	I1108 10:11:15.443766  461087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt with IP's: []
	I1108 10:11:15.526769  461087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt ...
	I1108 10:11:15.526799  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt: {Name:mkd285793896fa812cc3f297cc97019a110d8562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:15.526968  461087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key ...
	I1108 10:11:15.526977  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key: {Name:mka6d03fc97e946d4827f6237d0e1f5b50945bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:15.527048  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1108 10:11:15.527067  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1108 10:11:15.527079  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1108 10:11:15.527090  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1108 10:11:15.527101  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1108 10:11:15.527112  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1108 10:11:15.527124  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1108 10:11:15.527135  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1108 10:11:15.527184  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:11:15.527218  461087 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:11:15.527226  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:11:15.527249  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:11:15.527273  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:11:15.527294  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:11:15.527335  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:11:15.527368  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:15.527380  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem -> /usr/share/ca-certificates/294085.pem
	I1108 10:11:15.527391  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> /usr/share/ca-certificates/2940852.pem
	I1108 10:11:15.527913  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:11:15.562864  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:11:15.598764  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:11:15.630728  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:11:15.670278  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1108 10:11:15.694941  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:11:15.722936  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:11:15.758742  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:11:15.790979  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:11:15.824566  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:11:15.860753  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:11:15.892651  461087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:11:15.907089  461087 ssh_runner.go:195] Run: openssl version
	I1108 10:11:15.913777  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:11:15.922992  461087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:11:15.927159  461087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:11:15.927241  461087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:11:15.988583  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:11:15.998184  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:11:16.018199  461087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:11:16.023319  461087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:11:16.023396  461087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:11:16.099563  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:11:16.112748  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:11:16.122525  461087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:16.126762  461087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:16.126846  461087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:16.169756  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:11:16.178943  461087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:11:16.183395  461087 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:11:16.183456  461087 kubeadm.go:401] StartCluster: {Name:force-systemd-env-000082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:11:16.183531  461087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:11:16.183592  461087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:11:16.221185  461087 cri.go:89] found id: ""
	I1108 10:11:16.221277  461087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:11:16.236076  461087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:11:16.250038  461087 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:11:16.250115  461087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:11:16.262461  461087 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:11:16.262482  461087 kubeadm.go:158] found existing configuration files:
	
	I1108 10:11:16.262541  461087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:11:16.279112  461087 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:11:16.279184  461087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:11:16.293475  461087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:11:16.302520  461087 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:11:16.302592  461087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:11:16.310230  461087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:11:16.322393  461087 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:11:16.322468  461087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:11:16.332743  461087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:11:16.341194  461087 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:11:16.341277  461087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:11:16.354246  461087 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:11:16.441249  461087 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:11:16.441358  461087 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:11:16.485431  461087 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:11:16.485518  461087 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:11:16.485570  461087 kubeadm.go:319] OS: Linux
	I1108 10:11:16.485622  461087 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:11:16.485694  461087 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:11:16.485761  461087 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:11:16.485826  461087 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:11:16.485896  461087 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:11:16.485962  461087 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:11:16.486028  461087 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:11:16.486095  461087 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:11:16.486158  461087 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:11:16.622249  461087 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:11:16.622372  461087 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:11:16.622474  461087 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:11:16.636080  461087 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 10:11:13.183027  455293 node_ready.go:55] error getting node "pause-585281" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/pause-585281": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:11:16.642337  461087 out.go:252]   - Generating certificates and keys ...
	I1108 10:11:16.642447  461087 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:11:16.642523  461087 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:11:17.231203  461087 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:11:18.220482  461087 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:11:19.189234  461087 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:11:19.639770  461087 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:11:20.256254  461087 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:11:20.256787  461087 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-000082 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:11:20.685366  461087 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:11:20.685872  461087 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-000082 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:11:21.427075  461087 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:11:22.824563  461087 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:11:23.988868  461087 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:11:23.989472  461087 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:11:24.853251  461087 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:11:22.671439  455293 node_ready.go:49] node "pause-585281" is "Ready"
	I1108 10:11:22.671469  455293 node_ready.go:38] duration metric: took 13.489787634s for node "pause-585281" to be "Ready" ...
	I1108 10:11:22.671482  455293 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:11:22.671540  455293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:11:22.693535  455293 api_server.go:72] duration metric: took 13.677316944s to wait for apiserver process to appear ...
	I1108 10:11:22.693558  455293 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:11:22.693579  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:22.747124  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:11:22.747209  455293 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:11:23.193697  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:23.240203  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:11:23.240292  455293 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:11:23.693686  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:23.709710  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:11:23.709741  455293 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:11:24.193982  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:24.230486  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:11:24.230574  455293 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:11:24.694063  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:24.721902  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:11:24.726028  455293 api_server.go:141] control plane version: v1.34.1
	I1108 10:11:24.726106  455293 api_server.go:131] duration metric: took 2.032539244s to wait for apiserver health ...
	I1108 10:11:24.726138  455293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:11:24.734146  455293 system_pods.go:59] 7 kube-system pods found
	I1108 10:11:24.734231  455293 system_pods.go:61] "coredns-66bc5c9577-6644g" [7a079b8a-6641-49c0-9045-67e660dfa443] Running
	I1108 10:11:24.734257  455293 system_pods.go:61] "etcd-pause-585281" [e75af0d5-3d47-45d7-8cc3-179065325573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:11:24.734280  455293 system_pods.go:61] "kindnet-rvgcd" [66cb6e5e-6ba4-4952-85dc-37b05e46b000] Running
	I1108 10:11:24.734321  455293 system_pods.go:61] "kube-apiserver-pause-585281" [e2a58e2e-8d62-413d-9269-873c844d5b6c] Running
	I1108 10:11:24.734341  455293 system_pods.go:61] "kube-controller-manager-pause-585281" [571f0020-7fbb-4bb9-bbdc-fd0fb7735d17] Running
	I1108 10:11:24.734364  455293 system_pods.go:61] "kube-proxy-rv4j7" [81d952b7-1238-49d3-9e92-b4878ef4b207] Running
	I1108 10:11:24.734395  455293 system_pods.go:61] "kube-scheduler-pause-585281" [f343c463-edb4-432b-b1e6-1e9b1b4f1eed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:11:24.734421  455293 system_pods.go:74] duration metric: took 8.263466ms to wait for pod list to return data ...
	I1108 10:11:24.734444  455293 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:11:24.740290  455293 default_sa.go:45] found service account: "default"
	I1108 10:11:24.740349  455293 default_sa.go:55] duration metric: took 5.874633ms for default service account to be created ...
	I1108 10:11:24.740381  455293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:11:24.746824  455293 system_pods.go:86] 7 kube-system pods found
	I1108 10:11:24.746900  455293 system_pods.go:89] "coredns-66bc5c9577-6644g" [7a079b8a-6641-49c0-9045-67e660dfa443] Running
	I1108 10:11:24.746923  455293 system_pods.go:89] "etcd-pause-585281" [e75af0d5-3d47-45d7-8cc3-179065325573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:11:24.746942  455293 system_pods.go:89] "kindnet-rvgcd" [66cb6e5e-6ba4-4952-85dc-37b05e46b000] Running
	I1108 10:11:24.746974  455293 system_pods.go:89] "kube-apiserver-pause-585281" [e2a58e2e-8d62-413d-9269-873c844d5b6c] Running
	I1108 10:11:24.747001  455293 system_pods.go:89] "kube-controller-manager-pause-585281" [571f0020-7fbb-4bb9-bbdc-fd0fb7735d17] Running
	I1108 10:11:24.747024  455293 system_pods.go:89] "kube-proxy-rv4j7" [81d952b7-1238-49d3-9e92-b4878ef4b207] Running
	I1108 10:11:24.747057  455293 system_pods.go:89] "kube-scheduler-pause-585281" [f343c463-edb4-432b-b1e6-1e9b1b4f1eed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:11:24.747088  455293 system_pods.go:126] duration metric: took 6.686705ms to wait for k8s-apps to be running ...
	I1108 10:11:24.747112  455293 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:11:24.747195  455293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:11:24.870184  455293 system_svc.go:56] duration metric: took 123.06236ms WaitForService to wait for kubelet
	I1108 10:11:24.870260  455293 kubeadm.go:587] duration metric: took 15.854045048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:11:24.870300  455293 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:11:24.880868  455293 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:11:24.880963  455293 node_conditions.go:123] node cpu capacity is 2
	I1108 10:11:24.880992  455293 node_conditions.go:105] duration metric: took 10.669113ms to run NodePressure ...
	I1108 10:11:24.881018  455293 start.go:242] waiting for startup goroutines ...
	I1108 10:11:24.881055  455293 start.go:247] waiting for cluster config update ...
	I1108 10:11:24.881079  455293 start.go:256] writing updated cluster config ...
	I1108 10:11:24.881456  455293 ssh_runner.go:195] Run: rm -f paused
	I1108 10:11:24.893697  455293 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:11:24.894328  455293 kapi.go:59] client config for pause-585281: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key", CAFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21275c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 10:11:24.899661  455293 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6644g" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:24.913961  455293 pod_ready.go:94] pod "coredns-66bc5c9577-6644g" is "Ready"
	I1108 10:11:24.914036  455293 pod_ready.go:86] duration metric: took 14.30372ms for pod "coredns-66bc5c9577-6644g" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:24.918554  455293 pod_ready.go:83] waiting for pod "etcd-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:11:26.927782  455293 pod_ready.go:104] pod "etcd-pause-585281" is not "Ready", error: <nil>
	I1108 10:11:25.481258  461087 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:11:27.408297  461087 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:11:28.452119  461087 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:11:28.798002  461087 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:11:28.798585  461087 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:11:28.803700  461087 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:11:28.807166  461087 out.go:252]   - Booting up control plane ...
	I1108 10:11:28.807297  461087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:11:28.807379  461087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:11:28.807463  461087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:11:28.827083  461087 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:11:28.827356  461087 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:11:28.834934  461087 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:11:28.835260  461087 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:11:28.835548  461087 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:11:28.970331  461087 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:11:28.970455  461087 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:11:29.973338  461087 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000817749s
	I1108 10:11:29.974647  461087 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:11:29.974746  461087 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1108 10:11:29.974846  461087 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:11:29.974937  461087 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1108 10:11:29.425097  455293 pod_ready.go:104] pod "etcd-pause-585281" is not "Ready", error: <nil>
	I1108 10:11:30.924468  455293 pod_ready.go:94] pod "etcd-pause-585281" is "Ready"
	I1108 10:11:30.924494  455293 pod_ready.go:86] duration metric: took 6.005867883s for pod "etcd-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.929673  455293 pod_ready.go:83] waiting for pod "kube-apiserver-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.934432  455293 pod_ready.go:94] pod "kube-apiserver-pause-585281" is "Ready"
	I1108 10:11:30.934455  455293 pod_ready.go:86] duration metric: took 4.75965ms for pod "kube-apiserver-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.939412  455293 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.944771  455293 pod_ready.go:94] pod "kube-controller-manager-pause-585281" is "Ready"
	I1108 10:11:30.944846  455293 pod_ready.go:86] duration metric: took 5.400941ms for pod "kube-controller-manager-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.947350  455293 pod_ready.go:83] waiting for pod "kube-proxy-rv4j7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:31.123116  455293 pod_ready.go:94] pod "kube-proxy-rv4j7" is "Ready"
	I1108 10:11:31.123195  455293 pod_ready.go:86] duration metric: took 175.771915ms for pod "kube-proxy-rv4j7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:31.322617  455293 pod_ready.go:83] waiting for pod "kube-scheduler-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:31.722018  455293 pod_ready.go:94] pod "kube-scheduler-pause-585281" is "Ready"
	I1108 10:11:31.722093  455293 pod_ready.go:86] duration metric: took 399.399591ms for pod "kube-scheduler-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:31.722121  455293 pod_ready.go:40] duration metric: took 6.828342603s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:11:31.831158  455293 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:11:31.834395  455293 out.go:179] * Done! kubectl is now configured to use "pause-585281" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 10:11:14 pause-585281 crio[2155]: time="2025-11-08T10:11:14.041319447Z" level=info msg="Removed container f3ad499cb9437a4e259f93249bd95e93b63c48029618f98e02f9dc6922388226: kube-system/kindnet-rvgcd/kindnet-cni" id=6964c22b-aef3-43df-a9a1-79d0643128a6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.354173036Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bd8884c0-2b18-4bae-9c8d-5586ec888a97 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.355594219Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d3daa9f7-f586-427c-97d2-18f88506f0e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.3567696Z" level=info msg="Creating container: kube-system/kube-proxy-rv4j7/kube-proxy" id=47761892-43c2-4d95-be5f-b78c97ef06da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.356974401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.369276611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.370050325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.428479131Z" level=info msg="Created container f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f: kube-system/kube-proxy-rv4j7/kube-proxy" id=47761892-43c2-4d95-be5f-b78c97ef06da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.429326609Z" level=info msg="Starting container: f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f" id=341d5116-f1c8-41d2-bd73-ffa8b606a04b name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.434984239Z" level=info msg="Started container" PID=2544 containerID=f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f description=kube-system/kube-proxy-rv4j7/kube-proxy id=341d5116-f1c8-41d2-bd73-ffa8b606a04b name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc4731b177834525adf9140f49a4d6f3e4ffda8978ff81bfd2e49863829cdc83
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.415472089Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.426838509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.427000101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.427074186Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.441029187Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.441309581Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.44140463Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.451962153Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.452122875Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.452200217Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.461051411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.461236651Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.461339847Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.469802543Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.470468499Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f5198d16f3954       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   14 seconds ago       Running             kube-proxy                2                   dc4731b177834       kube-proxy-rv4j7                       kube-system
	9bd33a96a682a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            2                   7956d5f4f017d       kube-scheduler-pause-585281            kube-system
	0f67a9dac80be       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   2                   3673971eb3a1e       kube-controller-manager-pause-585281   kube-system
	6a1c4f1c1aebd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      2                   5d4b7f88c5f51       etcd-pause-585281                      kube-system
	05f96289e87d5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            2                   674aaf7b40abf       kube-apiserver-pause-585281            kube-system
	a5d01500a7731       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               2                   a2e81392cf87b       kindnet-rvgcd                          kube-system
	2e09acaec05a4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   2                   44397206ad113       coredns-66bc5c9577-6644g               kube-system
	da0726e75d242       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Created             kube-proxy                1                   dc4731b177834       kube-proxy-rv4j7                       kube-system
	5d52d78fc5243       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Exited              coredns                   1                   44397206ad113       coredns-66bc5c9577-6644g               kube-system
	bbd37a53b3ff9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            1                   674aaf7b40abf       kube-apiserver-pause-585281            kube-system
	cd79c121a8019       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               1                   a2e81392cf87b       kindnet-rvgcd                          kube-system
	3e717018e4db1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      1                   5d4b7f88c5f51       etcd-pause-585281                      kube-system
	f9491753f6ec7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   1                   3673971eb3a1e       kube-controller-manager-pause-585281   kube-system
	b3d2b33d28762       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            1                   7956d5f4f017d       kube-scheduler-pause-585281            kube-system
	3d3366a82a04a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago        Exited              kube-proxy                0                   dc4731b177834       kube-proxy-rv4j7                       kube-system
	
	
	==> coredns [2e09acaec05a41e735e1fd9867f0a8dd659729440cdc0700cbf72474301bfae9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57220 - 14329 "HINFO IN 2572719623816400100.8068358031023892758. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009521226s
	
	
	==> coredns [5d52d78fc52433186f3c29b69422aeae2f0c3db8c1adcfdf65dedf62e4a27f1a] <==
	
	
	==> describe nodes <==
	Name:               pause-585281
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-585281
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=pause-585281
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_08_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:08:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-585281
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:11:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:09:27 +0000   Sat, 08 Nov 2025 10:08:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:09:27 +0000   Sat, 08 Nov 2025 10:08:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:09:27 +0000   Sat, 08 Nov 2025 10:08:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:09:27 +0000   Sat, 08 Nov 2025 10:09:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-585281
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                54df2734-eac6-4be0-82ad-63063c5bfadc
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6644g                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m54s
	  kube-system                 etcd-pause-585281                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m59s
	  kube-system                 kindnet-rvgcd                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m54s
	  kube-system                 kube-apiserver-pause-585281             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-controller-manager-pause-585281    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-rv4j7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-scheduler-pause-585281             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m51s                kube-proxy       
	  Normal   Starting                 9s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  3m7s (x8 over 3m7s)  kubelet          Node pause-585281 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m7s (x8 over 3m7s)  kubelet          Node pause-585281 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m7s (x8 over 3m7s)  kubelet          Node pause-585281 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m59s                kubelet          Node pause-585281 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m59s                kubelet          Node pause-585281 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m59s                kubelet          Node pause-585281 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m59s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m55s                node-controller  Node pause-585281 event: Registered Node pause-585281 in Controller
	  Normal   NodeReady                2m11s                kubelet          Node pause-585281 status is now: NodeReady
	  Warning  ContainerGCFailed        59s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             26s (x7 over 87s)    kubelet          Node pause-585281 status is now: NodeNotReady
	  Normal   RegisteredNode           9s                   node-controller  Node pause-585281 event: Registered Node pause-585281 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:42] overlayfs: idmapped layers are currently not supported
	[  +3.260945] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:43] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:44] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:45] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:50] overlayfs: idmapped layers are currently not supported
	[ +37.319908] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:51] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e717018e4db1225a33be4045b2d1897c1b736eb0f7d54c1a6afd67748e324c0] <==
	
	
	==> etcd [6a1c4f1c1aebd9dc507372057dbea49115539de3328fd5fc6cc5c24ab0cfa8bf] <==
	{"level":"warn","ts":"2025-11-08T10:11:20.207282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.220207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.281599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.293690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.316636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.352627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.368458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.410997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.429083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.475602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.509424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.524515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.555127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.585658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.637263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.697182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.698097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.716406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.749046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.786771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.826588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.865648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.901111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.933507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:21.071573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54218","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:11:36 up  2:54,  0 user,  load average: 3.77, 2.47, 2.07
	Linux pause-585281 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a5d01500a77310a76b4659b56909c387796572bf5f8c6be88ba3a86442f8ee91] <==
	I1108 10:11:14.077642       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:11:14.077935       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:11:14.078194       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:11:14.078241       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:11:14.086131       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:11:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:11:14.417823       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:11:14.437053       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:11:14.438499       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:11:14.441255       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 10:11:22.839352       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:11:22.839495       1 metrics.go:72] Registering metrics
	I1108 10:11:22.839596       1 controller.go:711] "Syncing nftables rules"
	I1108 10:11:24.414973       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:11:24.415108       1 main.go:301] handling current node
	I1108 10:11:34.414036       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:11:34.414078       1 main.go:301] handling current node
	
	
	==> kindnet [cd79c121a8019fcb2c93baa98419929b529b33cd56a932dcc2771c55ae6e462c] <==
	
	
	==> kube-apiserver [05f96289e87d54b856123e4c909df122cf5ea7cafa2a1ea3251fd71853a64ef1] <==
	I1108 10:11:22.437437       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 10:11:22.610418       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:11:22.610447       1 policy_source.go:240] refreshing policies
	I1108 10:11:22.633622       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:11:22.649146       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:11:22.649386       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:11:22.649406       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:11:22.649524       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 10:11:22.649565       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 10:11:22.649601       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:11:22.649639       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:11:22.670784       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:11:22.670812       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:11:22.670820       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:11:22.675287       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:11:22.703524       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:11:22.719405       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:11:22.719453       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:11:22.719644       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:11:22.770894       1 cache.go:39] Caches are synced for autoregister controller
	E1108 10:11:22.803193       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:11:22.803311       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:11:22.816884       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:11:23.273085       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:11:24.810079       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-apiserver [bbd37a53b3ff900f2ae1d8b0266a6f002e6a17e20c476e6951de770c40fd31b1] <==
	
	
	==> kube-controller-manager [0f67a9dac80bea7b55941628cdf508d953ef6b68882be9639e87c90d34ad85c0] <==
	I1108 10:11:26.216609       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:11:26.220579       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:11:26.221050       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:11:26.224094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:11:26.224546       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:11:26.228958       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:11:26.240597       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:11:26.241895       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:11:26.252440       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:11:26.256239       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:11:26.263613       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:11:26.264163       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 10:11:26.265425       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:11:26.265505       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:11:26.266807       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:11:26.266918       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:11:26.267026       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-585281"
	I1108 10:11:26.267090       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:11:26.267158       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:11:26.269328       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:11:26.291165       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:11:26.291267       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:11:26.291298       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:11:26.299639       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:11:26.303983       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [f9491753f6ec75b40577ec5da4f195b64c30357340a9a2f07567a89929f81bc7] <==
	
	
	==> kube-proxy [3d3366a82a04a0d348be12815d6091dcbdf94d13f14ca32a0c7e5d22a7109d78] <==
	I1108 10:08:43.878074       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:08:43.974591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:08:44.076513       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:08:44.076550       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:08:44.076650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:08:44.099970       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:08:44.100024       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:08:44.104510       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:08:44.104986       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:08:44.105048       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:08:44.106445       1 config.go:200] "Starting service config controller"
	I1108 10:08:44.106515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:08:44.106564       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:08:44.106593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:08:44.106616       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:08:44.106620       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:08:44.107276       1 config.go:309] "Starting node config controller"
	I1108 10:08:44.107295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:08:44.107301       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:08:44.206702       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:08:44.206717       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:08:44.206746       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b] <==
	
	
	==> kube-proxy [f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f] <==
	I1108 10:11:22.482168       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:11:24.030677       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:11:24.367109       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:11:24.367224       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:11:24.373867       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:11:25.821592       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:11:25.833023       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:11:25.899394       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:11:25.899797       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:11:25.900015       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:11:25.901299       1 config.go:200] "Starting service config controller"
	I1108 10:11:25.901359       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:11:25.901401       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:11:25.901427       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:11:25.901461       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:11:25.901487       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:11:25.902187       1 config.go:309] "Starting node config controller"
	I1108 10:11:25.904748       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:11:25.904811       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:11:26.002469       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:11:26.002573       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:11:26.002601       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9bd33a96a682a6bf1f4bd44fbc1b47163722c9b2f12470e8668831c357f338c0] <==
	I1108 10:11:22.519122       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:11:25.846248       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:11:25.849286       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:11:25.854800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:11:25.855193       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:11:25.855269       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:11:25.855319       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:11:25.865888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:11:25.869590       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:11:25.869030       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:11:25.869725       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:11:25.955523       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:11:25.970659       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:11:25.970787       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b3d2b33d28762a416c5285c0c97c70b46ec8d299c7cf04769fff1e92b29b0419] <==
	
	
	==> kubelet <==
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.863346    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="76f74cdae75fdece871328a0e2fefc7a" pod="kube-system/etcd-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.863632    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c8721fdd20213e2c8efee2be82951653" pod="kube-system/kube-scheduler-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.863883    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5679dbf1f61a74e3cb78cd593ed3ec9f" pod="kube-system/kube-apiserver-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.864091    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0c36ac7bed6002836aed35c53aaf6af0" pod="kube-system/kube-controller-manager-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.864331    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rv4j7\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="81d952b7-1238-49d3-9e92-b4878ef4b207" pod="kube-system/kube-proxy-rv4j7"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.864583    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rvgcd\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66cb6e5e-6ba4-4952-85dc-37b05e46b000" pod="kube-system/kindnet-rvgcd"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: I1108 10:11:13.895506    1304 scope.go:117] "RemoveContainer" containerID="bbe481630744841ccaed0db9fce6bb52bc510b3db87e24ad66ea3eebe37bebe9"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.918957    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-6644g\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7a079b8a-6641-49c0-9045-67e660dfa443" pod="kube-system/coredns-66bc5c9577-6644g"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.919247    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="76f74cdae75fdece871328a0e2fefc7a" pod="kube-system/etcd-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.919489    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c8721fdd20213e2c8efee2be82951653" pod="kube-system/kube-scheduler-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.919703    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5679dbf1f61a74e3cb78cd593ed3ec9f" pod="kube-system/kube-apiserver-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.919903    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0c36ac7bed6002836aed35c53aaf6af0" pod="kube-system/kube-controller-manager-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.920127    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rv4j7\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="81d952b7-1238-49d3-9e92-b4878ef4b207" pod="kube-system/kube-proxy-rv4j7"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.920312    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rvgcd\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66cb6e5e-6ba4-4952-85dc-37b05e46b000" pod="kube-system/kindnet-rvgcd"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: I1108 10:11:13.942572    1304 scope.go:117] "RemoveContainer" containerID="c0521ccb74deafa2ff1de55dcde0a8e896e81ca44ba308ff269d6f5a89c789ed"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: I1108 10:11:13.977935    1304 scope.go:117] "RemoveContainer" containerID="bb3bc1b7161e709f57eb7e833763492d5028f77a4eda96f6b8dc67a64c5adfc1"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: I1108 10:11:13.995768    1304 scope.go:117] "RemoveContainer" containerID="479d30e3e53d7f09520a9e0325d8e785b53081f9b1f26424b87e0c9430a03b2e"
	Nov 08 10:11:14 pause-585281 kubelet[1304]: I1108 10:11:14.014624    1304 scope.go:117] "RemoveContainer" containerID="f3ad499cb9437a4e259f93249bd95e93b63c48029618f98e02f9dc6922388226"
	Nov 08 10:11:21 pause-585281 kubelet[1304]: I1108 10:11:21.353175    1304 scope.go:117] "RemoveContainer" containerID="da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b"
	Nov 08 10:11:22 pause-585281 kubelet[1304]: E1108 10:11:22.402865    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-rvgcd\" is forbidden: User \"system:node:pause-585281\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-585281' and this object" podUID="66cb6e5e-6ba4-4952-85dc-37b05e46b000" pod="kube-system/kindnet-rvgcd"
	Nov 08 10:11:22 pause-585281 kubelet[1304]: E1108 10:11:22.531634    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-6644g\" is forbidden: User \"system:node:pause-585281\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-585281' and this object" podUID="7a079b8a-6641-49c0-9045-67e660dfa443" pod="kube-system/coredns-66bc5c9577-6644g"
	Nov 08 10:11:22 pause-585281 kubelet[1304]: E1108 10:11:22.624754    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-585281\" is forbidden: User \"system:node:pause-585281\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-585281' and this object" podUID="76f74cdae75fdece871328a0e2fefc7a" pod="kube-system/etcd-pause-585281"
	Nov 08 10:11:32 pause-585281 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:11:32 pause-585281 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:11:32 pause-585281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
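The kubelet section above ends with systemd stopping kubelet.service at 10:11:32, right after the pause -p pause-585281 invocation in the audit log, which is consistent with a pause attempt that got as far as stopping the node agent. A minimal spot-check of what the pause actually did inside the node, assuming the pause-585281 profile from this run is still up (the commands are standard minikube ssh / systemd / crictl usage; the expected result is an assumption, not taken from this report):

	# hedged sketch: inspect the node state after the pause attempt
	out/minikube-linux-arm64 -p pause-585281 ssh -- sudo systemctl is-active kubelet   # "inactive" if the pause stopped kubelet
	out/minikube-linux-arm64 -p pause-585281 ssh -- sudo crictl ps                     # containers the runtime still reports as running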
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-585281 -n pause-585281
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-585281 -n pause-585281: exit status 2 (459.563471ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-585281 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
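The pod-phase query just above (kubectl get po with --field-selector=status.phase!=Running) is how the post-mortem looks for pods stuck outside the Running phase; an empty result means every pod reported phase Running at that moment. The same check can be reproduced by hand against this profile's context, assuming the kubeconfig from the run is still available:

	kubectl --context pause-585281 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{.items[*].metadata.name}'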
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-585281
helpers_test.go:243: (dbg) docker inspect pause-585281:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf",
	        "Created": "2025-11-08T10:08:09.247240488Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:08:09.316225424Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf/hosts",
	        "LogPath": "/var/lib/docker/containers/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf/5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf-json.log",
	        "Name": "/pause-585281",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-585281:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-585281",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5222e74c7831fc0454304f3cfe1119d55c3a623a3eee28d631bd4fa0ed1f87bf",
	                "LowerDir": "/var/lib/docker/overlay2/3a1535de8ee1e8b8f9eff1329356ce21eaa6a73d2235448a4ad8b4f54d9e9cc1-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a1535de8ee1e8b8f9eff1329356ce21eaa6a73d2235448a4ad8b4f54d9e9cc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a1535de8ee1e8b8f9eff1329356ce21eaa6a73d2235448a4ad8b4f54d9e9cc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a1535de8ee1e8b8f9eff1329356ce21eaa6a73d2235448a4ad8b4f54d9e9cc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-585281",
	                "Source": "/var/lib/docker/volumes/pause-585281/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-585281",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-585281",
	                "name.minikube.sigs.k8s.io": "pause-585281",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73886e444b8a2b0313385b030dfeae4a204f5068c0997ceba71405a4e4409596",
	            "SandboxKey": "/var/run/docker/netns/73886e444b8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-585281": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:60:1d:21:c7:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b3f2e47b845c3ff917bce851c0bd47d7afec62b040cb09ff0c6d64329a932166",
	                    "EndpointID": "343f37a67d1f795e3528e52f33450bacf315a13d56f10436ee6ea1936d6c5361",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-585281",
	                        "5222e74c7831"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
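In the inspect output above the outer kic container still reports "Status": "running" and "Paused": false; minikube's pause is expected to act on the Kubernetes containers inside the node rather than on the node container itself, so this alone is not evidence of failure. A hedged way to pull just those two fields instead of the full JSON (the -f/--format Go template is standard docker inspect usage):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-585281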
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-585281 -n pause-585281
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-585281 -n pause-585281: exit status 2 (445.69206ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
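The single-field Go templates used above ({{.APIServer}}, {{.Host}}) each surface one component and hide the rest; when a fuller picture is needed, minikube status also supports JSON output (a sketch, assuming the profile is still present):

	out/minikube-linux-arm64 status -p pause-585281 -o json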
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-585281 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-585281 logs -n 25: (1.742089511s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-099098 sudo systemctl cat kubelet --no-pager                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status docker --all --full --no-pager                                      │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat docker --no-pager                                                      │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/docker/daemon.json                                                          │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo docker system info                                                                   │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cri-dockerd --version                                                                │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat containerd --no-pager                                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/containerd/config.toml                                                      │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo containerd config dump                                                               │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status crio --all --full --no-pager                                        │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat crio --no-pager                                                        │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo crio config                                                                          │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ delete  │ -p cilium-099098                                                                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p force-systemd-env-000082 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │                     │
	│ pause   │ -p pause-585281 --alsologtostderr -v=5                                                                     │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:11:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:11:00.406326  461087 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:11:00.406488  461087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:11:00.406494  461087 out.go:374] Setting ErrFile to fd 2...
	I1108 10:11:00.406500  461087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:11:00.406803  461087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:11:00.407274  461087 out.go:368] Setting JSON to false
	I1108 10:11:00.408277  461087 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10410,"bootTime":1762586251,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:11:00.408373  461087 start.go:143] virtualization:  
	I1108 10:11:00.418369  461087 out.go:179] * [force-systemd-env-000082] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:11:00.422598  461087 notify.go:221] Checking for updates...
	I1108 10:11:00.422594  461087 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:11:00.437375  461087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:11:00.440665  461087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:11:00.443861  461087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:11:00.446999  461087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:11:00.450064  461087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1108 10:11:00.454705  461087 config.go:182] Loaded profile config "pause-585281": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:11:00.454944  461087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:11:00.480333  461087 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:11:00.480553  461087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:11:00.546813  461087 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:11:00.536445799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:11:00.546933  461087 docker.go:319] overlay module found
	I1108 10:11:00.550385  461087 out.go:179] * Using the docker driver based on user configuration
	I1108 10:11:00.553277  461087 start.go:309] selected driver: docker
	I1108 10:11:00.553311  461087 start.go:930] validating driver "docker" against <nil>
	I1108 10:11:00.553326  461087 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:11:00.554174  461087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:11:00.625624  461087 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:11:00.615875633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:11:00.625781  461087 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:11:00.626009  461087 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 10:11:00.629022  461087 out.go:179] * Using Docker driver with root privileges
	I1108 10:11:00.631812  461087 cni.go:84] Creating CNI manager for ""
	I1108 10:11:00.631873  461087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:11:00.631889  461087 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:11:00.631965  461087 start.go:353] cluster config:
	{Name:force-systemd-env-000082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:11:00.635028  461087 out.go:179] * Starting "force-systemd-env-000082" primary control-plane node in "force-systemd-env-000082" cluster
	I1108 10:11:00.637777  461087 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:11:00.640603  461087 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:11:00.643430  461087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:11:00.643481  461087 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:11:00.643493  461087 cache.go:59] Caching tarball of preloaded images
	I1108 10:11:00.643522  461087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:11:00.643595  461087 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:11:00.643605  461087 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:11:00.643713  461087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/config.json ...
	I1108 10:11:00.643731  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/config.json: {Name:mkcf8aede9b26585f077e4eebfc7536476dbedc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:00.663385  461087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:11:00.663407  461087 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:11:00.663427  461087 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:11:00.663452  461087 start.go:360] acquireMachinesLock for force-systemd-env-000082: {Name:mk9ade79be79f220f84147f63436e59c2fb21cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:11:00.663577  461087 start.go:364] duration metric: took 108.8µs to acquireMachinesLock for "force-systemd-env-000082"
	I1108 10:11:00.663604  461087 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-000082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:11:00.663676  461087 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:11:00.667124  461087 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:11:00.667378  461087 start.go:159] libmachine.API.Create for "force-systemd-env-000082" (driver="docker")
	I1108 10:11:00.667412  461087 client.go:173] LocalClient.Create starting
	I1108 10:11:00.667494  461087 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:11:00.667538  461087 main.go:143] libmachine: Decoding PEM data...
	I1108 10:11:00.667554  461087 main.go:143] libmachine: Parsing certificate...
	I1108 10:11:00.667637  461087 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:11:00.667654  461087 main.go:143] libmachine: Decoding PEM data...
	I1108 10:11:00.667663  461087 main.go:143] libmachine: Parsing certificate...
	I1108 10:11:00.668055  461087 cli_runner.go:164] Run: docker network inspect force-systemd-env-000082 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:11:00.682966  461087 cli_runner.go:211] docker network inspect force-systemd-env-000082 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:11:00.683049  461087 network_create.go:284] running [docker network inspect force-systemd-env-000082] to gather additional debugging logs...
	I1108 10:11:00.683066  461087 cli_runner.go:164] Run: docker network inspect force-systemd-env-000082
	W1108 10:11:00.699497  461087 cli_runner.go:211] docker network inspect force-systemd-env-000082 returned with exit code 1
	I1108 10:11:00.699527  461087 network_create.go:287] error running [docker network inspect force-systemd-env-000082]: docker network inspect force-systemd-env-000082: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-000082 not found
	I1108 10:11:00.699542  461087 network_create.go:289] output of [docker network inspect force-systemd-env-000082]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-000082 not found
	
	** /stderr **
	I1108 10:11:00.699654  461087 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:11:00.714586  461087 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:11:00.714979  461087 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:11:00.715353  461087 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:11:00.715640  461087 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b3f2e47b845c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:91:00:2f:ef:e8} reservation:<nil>}
	I1108 10:11:00.716040  461087 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018e1f30}
	I1108 10:11:00.716071  461087 network_create.go:124] attempt to create docker network force-systemd-env-000082 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 10:11:00.716134  461087 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-000082 force-systemd-env-000082
	I1108 10:11:00.777285  461087 network_create.go:108] docker network force-systemd-env-000082 192.168.85.0/24 created
	I1108 10:11:00.777328  461087 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-000082" container
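
	The subnet scan above (skip 192.168.49/58/67/76 because an existing bridge already owns them, settle on 192.168.85.0/24, then give the node the .2 address) can be sketched as below. This is an illustrative reconstruction, not minikube's network_create code; the step of 9 between candidates and the contents of the taken set are simply the values observed in this run.

// free_subnet_sketch.go - illustrative only: walk candidate 192.168.x.0/24
// ranges, skip the ones already backed by an existing bridge, take the first
// free one and derive the gateway (.1) and node IP (.2) from it.
package main

import "fmt"

func main() {
	// Subnets reported as taken in this run, each owned by a br-* interface.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	// Candidate third octets spaced the way they appear in the log: 49, 58, 67, 76, 85, ...
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		fmt.Printf("using %s (gateway 192.168.%d.1, node IP 192.168.%d.2)\n", subnet, third, third)
		return
	}
	fmt.Println("no free subnet found")
}
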
	I1108 10:11:00.777417  461087 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:11:00.794186  461087 cli_runner.go:164] Run: docker volume create force-systemd-env-000082 --label name.minikube.sigs.k8s.io=force-systemd-env-000082 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:11:00.812081  461087 oci.go:103] Successfully created a docker volume force-systemd-env-000082
	I1108 10:11:00.812177  461087 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-000082-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-000082 --entrypoint /usr/bin/test -v force-systemd-env-000082:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:11:01.352554  461087 oci.go:107] Successfully prepared a docker volume force-systemd-env-000082
	I1108 10:11:01.352602  461087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:11:01.352623  461087 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:11:01.352687  461087 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-000082:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
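
	The extraction step above populates the machine's named volume by running a throwaway container whose entrypoint is tar. A minimal sketch of the same invocation driven from Go via os/exec, using the tarball path, volume name and base image from the log (the image digest is dropped here for brevity; none of this is minikube's own code path):

// preload_extract_sketch.go - shell out to the docker CLI to unpack the lz4
// preload tarball into the named volume that later becomes the machine's /var.
package main

import (
	"log"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	volume := "force-systemd-env-000082"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837" // digest omitted

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // the tarball, mounted read-only
		"-v", volume+":/extractDir",        // the named volume to populate
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("preload extraction failed: %v\n%s", err, out)
	}
}
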
	I1108 10:11:07.669085  455293 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.422075994s)
	I1108 10:11:07.669109  455293 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:11:07.669181  455293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:11:07.673548  455293 start.go:564] Will wait 60s for crictl version
	I1108 10:11:07.673616  455293 ssh_runner.go:195] Run: which crictl
	I1108 10:11:07.677374  455293 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:11:07.719347  455293 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:11:07.719455  455293 ssh_runner.go:195] Run: crio --version
	I1108 10:11:07.761133  455293 ssh_runner.go:195] Run: crio --version
	I1108 10:11:07.805662  455293 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:11:07.809447  455293 cli_runner.go:164] Run: docker network inspect pause-585281 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:11:07.831439  455293 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:11:07.835898  455293 kubeadm.go:884] updating cluster {Name:pause-585281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-585281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:11:07.836060  455293 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:11:07.836115  455293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:11:07.898839  455293 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:11:07.898866  455293 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:11:07.898922  455293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:11:07.935776  455293 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:11:07.935795  455293 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:11:07.935803  455293 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:11:07.935903  455293 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-585281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-585281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:11:07.935981  455293 ssh_runner.go:195] Run: crio config
	I1108 10:11:08.020485  455293 cni.go:84] Creating CNI manager for ""
	I1108 10:11:08.020562  455293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:11:08.020603  455293 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:11:08.020655  455293 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-585281 NodeName:pause-585281 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:11:08.020830  455293 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-585281"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:11:08.020970  455293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:11:08.032570  455293 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:11:08.032650  455293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:11:08.042201  455293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 10:11:08.056956  455293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:11:08.073469  455293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
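
	The 2209-byte file copied to /var/tmp/minikube/kubeadm.yaml.new is the four-document YAML shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small stand-alone sketch, not part of minikube, that decodes such a multi-document file and prints each document's apiVersion and kind as a quick sanity check; it assumes the gopkg.in/yaml.v3 module and the path from the log:

// kubeadm_config_check.go - walk the multi-document YAML and list its kinds.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Expected here: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
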
	I1108 10:11:08.089678  455293 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:11:08.094342  455293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:11:08.257824  455293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:11:08.276280  455293 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281 for IP: 192.168.76.2
	I1108 10:11:08.276314  455293 certs.go:195] generating shared ca certs ...
	I1108 10:11:08.276348  455293 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:08.276535  455293 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:11:08.276606  455293 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:11:08.276620  455293 certs.go:257] generating profile certs ...
	I1108 10:11:08.276744  455293 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key
	I1108 10:11:08.276834  455293 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/apiserver.key.9382e487
	I1108 10:11:08.276882  455293 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/proxy-client.key
	I1108 10:11:08.277027  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:11:08.277062  455293 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:11:08.277075  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:11:08.277098  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:11:08.277122  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:11:08.277152  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:11:08.277204  455293 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:11:08.277832  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:11:08.300354  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:11:08.319515  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:11:08.338594  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:11:08.357750  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 10:11:08.375430  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:11:08.396316  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:11:08.413963  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:11:08.431284  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:11:08.449747  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:11:08.468312  455293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:11:08.485945  455293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:11:08.499042  455293 ssh_runner.go:195] Run: openssl version
	I1108 10:11:08.505560  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:11:08.514556  455293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:08.518373  455293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:08.518477  455293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:08.561151  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:11:08.569363  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:11:08.577990  455293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:11:08.581925  455293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:11:08.581990  455293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:11:08.623118  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:11:08.631412  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:11:08.639954  455293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:11:08.643875  455293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:11:08.643953  455293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:11:08.685452  455293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:11:08.693820  455293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:11:08.697813  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:11:08.741957  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:11:08.783192  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:11:08.824380  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:11:08.865441  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:11:08.906862  455293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
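
	Each `openssl x509 ... -checkend 86400` probe above asks a single question: will this certificate still be valid 24 hours from now? A rough Go equivalent is below; the file path is one of the certs named in the log, and this is illustrative rather than what minikube actually runs:

// cert_checkend_sketch.go - fail if the certificate expires within 86400 seconds.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(86400 * time.Second) // mirrors -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}
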
	I1108 10:11:08.948313  455293 kubeadm.go:401] StartCluster: {Name:pause-585281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-585281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:11:08.948437  455293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:11:08.948505  455293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:11:08.976261  455293 cri.go:89] found id: "da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b"
	I1108 10:11:08.976284  455293 cri.go:89] found id: "5d52d78fc52433186f3c29b69422aeae2f0c3db8c1adcfdf65dedf62e4a27f1a"
	I1108 10:11:08.976289  455293 cri.go:89] found id: "bbd37a53b3ff900f2ae1d8b0266a6f002e6a17e20c476e6951de770c40fd31b1"
	I1108 10:11:08.976293  455293 cri.go:89] found id: "cd79c121a8019fcb2c93baa98419929b529b33cd56a932dcc2771c55ae6e462c"
	I1108 10:11:08.976296  455293 cri.go:89] found id: "3e717018e4db1225a33be4045b2d1897c1b736eb0f7d54c1a6afd67748e324c0"
	I1108 10:11:08.976299  455293 cri.go:89] found id: "f9491753f6ec75b40577ec5da4f195b64c30357340a9a2f07567a89929f81bc7"
	I1108 10:11:08.976302  455293 cri.go:89] found id: "b3d2b33d28762a416c5285c0c97c70b46ec8d299c7cf04769fff1e92b29b0419"
	I1108 10:11:08.976305  455293 cri.go:89] found id: "d480e4d9a291fe52ec9ea2c2b32ab9c33154b183a934ae4982f262482e10f6b2"
	I1108 10:11:08.976308  455293 cri.go:89] found id: "f3ad499cb9437a4e259f93249bd95e93b63c48029618f98e02f9dc6922388226"
	I1108 10:11:08.976316  455293 cri.go:89] found id: "3d3366a82a04a0d348be12815d6091dcbdf94d13f14ca32a0c7e5d22a7109d78"
	I1108 10:11:08.976319  455293 cri.go:89] found id: "bb3bc1b7161e709f57eb7e833763492d5028f77a4eda96f6b8dc67a64c5adfc1"
	I1108 10:11:08.976322  455293 cri.go:89] found id: "bbe481630744841ccaed0db9fce6bb52bc510b3db87e24ad66ea3eebe37bebe9"
	I1108 10:11:08.976325  455293 cri.go:89] found id: "479d30e3e53d7f09520a9e0325d8e785b53081f9b1f26424b87e0c9430a03b2e"
	I1108 10:11:08.976329  455293 cri.go:89] found id: "c0521ccb74deafa2ff1de55dcde0a8e896e81ca44ba308ff269d6f5a89c789ed"
	I1108 10:11:08.976332  455293 cri.go:89] found id: ""
	I1108 10:11:08.976381  455293 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:11:08.987361  455293 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:11:08Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:11:08.987461  455293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:11:08.995180  455293 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:11:08.995250  455293 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:11:08.995347  455293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:11:09.002917  455293 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:11:09.003554  455293 kubeconfig.go:125] found "pause-585281" server: "https://192.168.76.2:8443"
	I1108 10:11:09.004304  455293 kapi.go:59] client config for pause-585281: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key", CAFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21275c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 10:11:09.004848  455293 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 10:11:09.004862  455293 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 10:11:09.004867  455293 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 10:11:09.004872  455293 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 10:11:09.004876  455293 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 10:11:09.005320  455293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:11:09.015119  455293 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:11:09.015203  455293 kubeadm.go:602] duration metric: took 19.941505ms to restartPrimaryControlPlane
	I1108 10:11:09.015220  455293 kubeadm.go:403] duration metric: took 66.917587ms to StartCluster
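
	The restart path above reduces to: write the freshly generated config to kubeadm.yaml.new, compare it with the kubeadm.yaml already on the node, and skip reconfiguration when they match. A minimal sketch of that decision follows; the paths come from the log, and the byte-for-byte comparison is an assumption standing in for the `diff -u` run above:

// needs_reconfig_sketch.go - decide whether the control plane must be reconfigured.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

func main() {
	current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	proposed, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(current, proposed) {
		fmt.Println("running cluster does not require reconfiguration")
		return
	}
	fmt.Println("config drift detected; control plane needs reconfiguration")
}
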
	I1108 10:11:09.015237  455293 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:09.015322  455293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:11:09.015921  455293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:09.016183  455293 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:11:09.016577  455293 config.go:182] Loaded profile config "pause-585281": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:11:09.016568  455293 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:11:09.021733  455293 out.go:179] * Verifying Kubernetes components...
	I1108 10:11:09.021896  455293 out.go:179] * Enabled addons: 
	I1108 10:11:05.782843  461087 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-000082:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.430114299s)
	I1108 10:11:05.782875  461087 kic.go:203] duration metric: took 4.430250176s to extract preloaded images to volume ...
	W1108 10:11:05.783020  461087 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:11:05.783164  461087 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:11:05.833440  461087 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-000082 --name force-systemd-env-000082 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-000082 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-000082 --network force-systemd-env-000082 --ip 192.168.85.2 --volume force-systemd-env-000082:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:11:06.168131  461087 cli_runner.go:164] Run: docker container inspect force-systemd-env-000082 --format={{.State.Running}}
	I1108 10:11:06.193255  461087 cli_runner.go:164] Run: docker container inspect force-systemd-env-000082 --format={{.State.Status}}
	I1108 10:11:06.218300  461087 cli_runner.go:164] Run: docker exec force-systemd-env-000082 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:11:06.288768  461087 oci.go:144] the created container "force-systemd-env-000082" has a running status.
	I1108 10:11:06.288794  461087 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa...
	I1108 10:11:06.636622  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1108 10:11:06.636673  461087 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:11:06.666118  461087 cli_runner.go:164] Run: docker container inspect force-systemd-env-000082 --format={{.State.Status}}
	I1108 10:11:06.684032  461087 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:11:06.684052  461087 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-000082 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:11:06.725380  461087 cli_runner.go:164] Run: docker container inspect force-systemd-env-000082 --format={{.State.Status}}
	I1108 10:11:06.745548  461087 machine.go:94] provisionDockerMachine start ...
	I1108 10:11:06.745641  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:06.764979  461087 main.go:143] libmachine: Using SSH client type: native
	I1108 10:11:06.765319  461087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1108 10:11:06.765335  461087 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:11:06.765987  461087 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:11:09.924752  461087 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-000082
	
	I1108 10:11:09.924782  461087 ubuntu.go:182] provisioning hostname "force-systemd-env-000082"
	I1108 10:11:09.924893  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:09.942938  461087 main.go:143] libmachine: Using SSH client type: native
	I1108 10:11:09.943242  461087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1108 10:11:09.943261  461087 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-000082 && echo "force-systemd-env-000082" | sudo tee /etc/hostname
	I1108 10:11:10.111163  461087 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-000082
	
	I1108 10:11:10.111249  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:10.129436  461087 main.go:143] libmachine: Using SSH client type: native
	I1108 10:11:10.129765  461087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1108 10:11:10.129789  461087 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-000082' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-000082/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-000082' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:11:10.285277  461087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:11:10.285361  461087 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:11:10.285411  461087 ubuntu.go:190] setting up certificates
	I1108 10:11:10.285439  461087 provision.go:84] configureAuth start
	I1108 10:11:10.285525  461087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-000082
	I1108 10:11:10.314006  461087 provision.go:143] copyHostCerts
	I1108 10:11:10.314054  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:11:10.314091  461087 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:11:10.314103  461087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:11:10.314184  461087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:11:10.314284  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:11:10.314306  461087 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:11:10.314311  461087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:11:10.314350  461087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:11:10.314405  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:11:10.314426  461087 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:11:10.314436  461087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:11:10.314461  461087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:11:10.314514  461087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-000082 san=[127.0.0.1 192.168.85.2 force-systemd-env-000082 localhost minikube]
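
	The server certificate above is requested with san=[127.0.0.1 192.168.85.2 force-systemd-env-000082 localhost minikube]; before issuing it, those SANs have to be split into IP SANs and DNS SANs. A tiny illustrative sketch of that split (the program is hypothetical, not minikube's provision code):

// san_split_sketch.go - separate IP SANs from DNS SANs for an x509 template.
package main

import (
	"fmt"
	"net"
)

func main() {
	sans := []string{"127.0.0.1", "192.168.85.2", "force-systemd-env-000082", "localhost", "minikube"}
	var ips []net.IP
	var dnsNames []string
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			ips = append(ips, ip) // would populate the certificate's IPAddresses field
		} else {
			dnsNames = append(dnsNames, s) // would populate DNSNames
		}
	}
	fmt.Println("IP SANs: ", ips)
	fmt.Println("DNS SANs:", dnsNames)
}
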
	I1108 10:11:09.024612  455293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:11:09.024785  455293 addons.go:515] duration metric: took 8.212158ms for enable addons: enabled=[]
	I1108 10:11:09.166948  455293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:11:09.181644  455293 node_ready.go:35] waiting up to 6m0s for node "pause-585281" to be "Ready" ...
	W1108 10:11:11.182982  455293 node_ready.go:55] error getting node "pause-585281" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/pause-585281": dial tcp 192.168.76.2:8443: connect: connection refused
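
	The "waiting up to 6m0s for node ... to be Ready" loop tolerates exactly this kind of transient failure (connection refused while the apiserver restarts). A rough client-go sketch of such a wait is below; the kubeconfig path is the one from the log, the 2-second retry interval is an assumption, and this is not node_ready.go itself:

// node_ready_wait_sketch.go - poll a Node's Ready condition, retrying transient errors.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21866-292236/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "pause-585281", metav1.GetOptions{})
		if err != nil {
			// Transient while the control plane restarts (e.g. connection refused); retry.
			time.Sleep(2 * time.Second)
			continue
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node to become Ready")
}
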
	I1108 10:11:10.913549  461087 provision.go:177] copyRemoteCerts
	I1108 10:11:10.913641  461087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:11:10.913681  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:10.931686  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.040579  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1108 10:11:11.040679  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:11:11.058691  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1108 10:11:11.058758  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 10:11:11.077462  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1108 10:11:11.077583  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:11:11.096772  461087 provision.go:87] duration metric: took 811.295957ms to configureAuth
	I1108 10:11:11.096801  461087 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:11:11.097131  461087 config.go:182] Loaded profile config "force-systemd-env-000082": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:11:11.097280  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.115517  461087 main.go:143] libmachine: Using SSH client type: native
	I1108 10:11:11.115841  461087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1108 10:11:11.115863  461087 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:11:11.379381  461087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:11:11.379405  461087 machine.go:97] duration metric: took 4.633836912s to provisionDockerMachine
	I1108 10:11:11.379415  461087 client.go:176] duration metric: took 10.711996117s to LocalClient.Create
	I1108 10:11:11.379429  461087 start.go:167] duration metric: took 10.71205256s to libmachine.API.Create "force-systemd-env-000082"
	I1108 10:11:11.379451  461087 start.go:293] postStartSetup for "force-systemd-env-000082" (driver="docker")
	I1108 10:11:11.379465  461087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:11:11.379526  461087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:11:11.379566  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.396097  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.501247  461087 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:11:11.504766  461087 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:11:11.504803  461087 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:11:11.504829  461087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:11:11.504927  461087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:11:11.505017  461087 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:11:11.505030  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> /etc/ssl/certs/2940852.pem
	I1108 10:11:11.505145  461087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:11:11.512575  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:11:11.530148  461087 start.go:296] duration metric: took 150.661795ms for postStartSetup
	I1108 10:11:11.530608  461087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-000082
	I1108 10:11:11.550697  461087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/config.json ...
	I1108 10:11:11.550982  461087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:11:11.551042  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.567632  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.669953  461087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:11:11.674599  461087 start.go:128] duration metric: took 11.010907643s to createHost
	I1108 10:11:11.674625  461087 start.go:83] releasing machines lock for "force-systemd-env-000082", held for 11.011038548s
	I1108 10:11:11.674697  461087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-000082
	I1108 10:11:11.692356  461087 ssh_runner.go:195] Run: cat /version.json
	I1108 10:11:11.692417  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.692724  461087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:11:11.692829  461087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-000082
	I1108 10:11:11.709623  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.711506  461087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/force-systemd-env-000082/id_rsa Username:docker}
	I1108 10:11:11.812634  461087 ssh_runner.go:195] Run: systemctl --version
	I1108 10:11:11.934735  461087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:11:11.971210  461087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:11:11.975568  461087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:11:11.975685  461087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:11:12.006233  461087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:11:12.006264  461087 start.go:496] detecting cgroup driver to use...
	I1108 10:11:12.006283  461087 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1108 10:11:12.006354  461087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:11:12.025481  461087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:11:12.038801  461087 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:11:12.038872  461087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:11:12.055059  461087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:11:12.074011  461087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:11:12.185445  461087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:11:12.301339  461087 docker.go:234] disabling docker service ...
	I1108 10:11:12.301477  461087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:11:12.325427  461087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:11:12.339708  461087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:11:12.454839  461087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:11:12.574289  461087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:11:12.587706  461087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:11:12.602901  461087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:11:12.602969  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.612825  461087 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 10:11:12.612893  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.622569  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.631687  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.640997  461087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:11:12.649046  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.657995  461087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.672011  461087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:11:12.681676  461087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:11:12.689968  461087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:11:12.697628  461087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:11:12.828470  461087 ssh_runner.go:195] Run: sudo systemctl restart crio
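
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with entries along these lines. This is an illustrative reconstruction limited to the keys touched in this run; section headers and every other default are omitted:

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

	The subsequent `systemctl daemon-reload` and `systemctl restart crio` are what make these settings (systemd cgroup driver, pod-scoped conmon cgroup, unprivileged low ports) take effect before kubeadm runs.
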
	I1108 10:11:12.960357  461087 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:11:12.960430  461087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:11:12.964839  461087 start.go:564] Will wait 60s for crictl version
	I1108 10:11:12.965116  461087 ssh_runner.go:195] Run: which crictl
	I1108 10:11:12.969245  461087 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:11:12.999883  461087 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:11:13.000021  461087 ssh_runner.go:195] Run: crio --version
	I1108 10:11:13.030652  461087 ssh_runner.go:195] Run: crio --version
	I1108 10:11:13.064905  461087 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:11:13.067935  461087 cli_runner.go:164] Run: docker network inspect force-systemd-env-000082 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:11:13.084631  461087 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:11:13.088857  461087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:11:13.098866  461087 kubeadm.go:884] updating cluster {Name:force-systemd-env-000082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:11:13.098978  461087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:11:13.099032  461087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:11:13.132542  461087 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:11:13.132568  461087 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:11:13.132638  461087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:11:13.160282  461087 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:11:13.160313  461087 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:11:13.160322  461087 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:11:13.160416  461087 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-000082 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:11:13.160514  461087 ssh_runner.go:195] Run: crio config
	I1108 10:11:13.229145  461087 cni.go:84] Creating CNI manager for ""
	I1108 10:11:13.229169  461087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:11:13.229221  461087 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:11:13.229252  461087 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-000082 NodeName:force-systemd-env-000082 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:11:13.229426  461087 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-000082"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:11:13.229502  461087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:11:13.237396  461087 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:11:13.237486  461087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:11:13.245155  461087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1108 10:11:13.257964  461087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:11:13.271455  461087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1108 10:11:13.284809  461087 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:11:13.288678  461087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:11:13.298531  461087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:11:13.410061  461087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:11:13.427024  461087 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082 for IP: 192.168.85.2
	I1108 10:11:13.427047  461087 certs.go:195] generating shared ca certs ...
	I1108 10:11:13.427063  461087 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:13.427231  461087 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:11:13.427290  461087 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:11:13.427303  461087 certs.go:257] generating profile certs ...
	I1108 10:11:13.427379  461087 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.key
	I1108 10:11:13.427407  461087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.crt with IP's: []
	I1108 10:11:14.886388  461087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.crt ...
	I1108 10:11:14.886472  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.crt: {Name:mk7f289941456bbbd39d15b5d1963e1264c2c34d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:14.886713  461087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.key ...
	I1108 10:11:14.886755  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/client.key: {Name:mk683f7ffff9f8c8a187abefa434a8f8bdceb939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:14.886901  461087 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key.32f5ab85
	I1108 10:11:14.886950  461087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt.32f5ab85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 10:11:15.443000  461087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt.32f5ab85 ...
	I1108 10:11:15.443073  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt.32f5ab85: {Name:mk0d01ffe0e7573b870f75d8ad0164e2457a62ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:15.443305  461087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key.32f5ab85 ...
	I1108 10:11:15.443347  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key.32f5ab85: {Name:mkd2d89b1a6b561c7b22e7a3941d813e824c1c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:15.443490  461087 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt.32f5ab85 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt
	I1108 10:11:15.443609  461087 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key.32f5ab85 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key
	I1108 10:11:15.443717  461087 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key
	I1108 10:11:15.443766  461087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt with IP's: []
	I1108 10:11:15.526769  461087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt ...
	I1108 10:11:15.526799  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt: {Name:mkd285793896fa812cc3f297cc97019a110d8562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:11:15.526968  461087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key ...
	I1108 10:11:15.526977  461087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key: {Name:mka6d03fc97e946d4827f6237d0e1f5b50945bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
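
	The certs.go/crypto.go steps above mint profile certificates signed by the shared minikube CA, with the listed IP SANs. Below is a compact, illustrative Go sketch of issuing one such leaf certificate from an existing CA key pair with crypto/x509; the SANs are copied from the log, while file names and key formats are assumptions, and this is not the minikube code path itself.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the shared CA (assumed PEM-encoded cert and PKCS#1 key) from disk.
		caCertPEM, _ := os.ReadFile("ca.crt")
		caKeyPEM, _ := os.ReadFile("ca.key")
		caBlock, _ := pem.Decode(caCertPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			panic(err)
		}

		// Fresh key for the leaf certificate.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The apiserver SANs from the log above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}

		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		_ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		_ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)}), 0o600)
	}
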
	I1108 10:11:15.527048  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1108 10:11:15.527067  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1108 10:11:15.527079  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1108 10:11:15.527090  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1108 10:11:15.527101  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1108 10:11:15.527112  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1108 10:11:15.527124  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1108 10:11:15.527135  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1108 10:11:15.527184  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:11:15.527218  461087 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:11:15.527226  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:11:15.527249  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:11:15.527273  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:11:15.527294  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:11:15.527335  461087 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:11:15.527368  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:15.527380  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem -> /usr/share/ca-certificates/294085.pem
	I1108 10:11:15.527391  461087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> /usr/share/ca-certificates/2940852.pem
	I1108 10:11:15.527913  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:11:15.562864  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:11:15.598764  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:11:15.630728  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:11:15.670278  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1108 10:11:15.694941  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:11:15.722936  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:11:15.758742  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/force-systemd-env-000082/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:11:15.790979  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:11:15.824566  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:11:15.860753  461087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:11:15.892651  461087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:11:15.907089  461087 ssh_runner.go:195] Run: openssl version
	I1108 10:11:15.913777  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:11:15.922992  461087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:11:15.927159  461087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:11:15.927241  461087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:11:15.988583  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:11:15.998184  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:11:16.018199  461087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:11:16.023319  461087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:11:16.023396  461087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:11:16.099563  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:11:16.112748  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:11:16.122525  461087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:16.126762  461087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:16.126846  461087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:11:16.169756  461087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
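
	The ln -fs steps above make each CA certificate resolvable under /etc/ssl/certs by its OpenSSL subject hash. A short sketch of the same install step, shelling out to openssl for the hash; the paths follow the log, the helper itself is hypothetical.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCert runs `openssl x509 -hash -noout -in <pem>` and creates
	// /etc/ssl/certs/<hash>.0 -> <pem> unless a link is already present,
	// the equivalent of the `test -L ... || ln -fs ...` commands above.
	func installCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		if _, err := os.Lstat(link); err == nil {
			return nil // already linked
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
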
	I1108 10:11:16.178943  461087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:11:16.183395  461087 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:11:16.183456  461087 kubeadm.go:401] StartCluster: {Name:force-systemd-env-000082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-000082 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:11:16.183531  461087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:11:16.183592  461087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:11:16.221185  461087 cri.go:89] found id: ""
	I1108 10:11:16.221277  461087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:11:16.236076  461087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:11:16.250038  461087 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:11:16.250115  461087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:11:16.262461  461087 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:11:16.262482  461087 kubeadm.go:158] found existing configuration files:
	
	I1108 10:11:16.262541  461087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:11:16.279112  461087 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:11:16.279184  461087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:11:16.293475  461087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:11:16.302520  461087 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:11:16.302592  461087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:11:16.310230  461087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:11:16.322393  461087 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:11:16.322468  461087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:11:16.332743  461087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:11:16.341194  461087 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:11:16.341277  461087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:11:16.354246  461087 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:11:16.441249  461087 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:11:16.441358  461087 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:11:16.485431  461087 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:11:16.485518  461087 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:11:16.485570  461087 kubeadm.go:319] OS: Linux
	I1108 10:11:16.485622  461087 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:11:16.485694  461087 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:11:16.485761  461087 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:11:16.485826  461087 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:11:16.485896  461087 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:11:16.485962  461087 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:11:16.486028  461087 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:11:16.486095  461087 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:11:16.486158  461087 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:11:16.622249  461087 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:11:16.622372  461087 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:11:16.622474  461087 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:11:16.636080  461087 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 10:11:13.183027  455293 node_ready.go:55] error getting node "pause-585281" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/pause-585281": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:11:16.642337  461087 out.go:252]   - Generating certificates and keys ...
	I1108 10:11:16.642447  461087 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:11:16.642523  461087 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:11:17.231203  461087 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:11:18.220482  461087 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:11:19.189234  461087 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:11:19.639770  461087 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:11:20.256254  461087 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:11:20.256787  461087 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-000082 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:11:20.685366  461087 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:11:20.685872  461087 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-000082 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:11:21.427075  461087 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:11:22.824563  461087 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:11:23.988868  461087 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:11:23.989472  461087 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:11:24.853251  461087 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:11:22.671439  455293 node_ready.go:49] node "pause-585281" is "Ready"
	I1108 10:11:22.671469  455293 node_ready.go:38] duration metric: took 13.489787634s for node "pause-585281" to be "Ready" ...
	I1108 10:11:22.671482  455293 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:11:22.671540  455293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:11:22.693535  455293 api_server.go:72] duration metric: took 13.677316944s to wait for apiserver process to appear ...
	I1108 10:11:22.693558  455293 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:11:22.693579  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:22.747124  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:11:22.747209  455293 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:11:23.193697  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:23.240203  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:11:23.240292  455293 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:11:23.693686  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:23.709710  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:11:23.709741  455293 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:11:24.193982  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:24.230486  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:11:24.230574  455293 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:11:24.694063  455293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:11:24.721902  455293 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:11:24.726028  455293 api_server.go:141] control plane version: v1.34.1
	I1108 10:11:24.726106  455293 api_server.go:131] duration metric: took 2.032539244s to wait for apiserver health ...
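
	The 500 responses above come from apiserver post-start hooks that have not finished yet; the wait simply re-polls /healthz until it returns 200. An illustrative Go version of that loop, with the endpoint and interval taken from the log and TLS verification skipped purely to keep the sketch short:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz re-polls the apiserver /healthz endpoint until it answers 200
	// or the deadline passes, the same loop api_server.go is running above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // control plane reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s still unhealthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
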
	I1108 10:11:24.726138  455293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:11:24.734146  455293 system_pods.go:59] 7 kube-system pods found
	I1108 10:11:24.734231  455293 system_pods.go:61] "coredns-66bc5c9577-6644g" [7a079b8a-6641-49c0-9045-67e660dfa443] Running
	I1108 10:11:24.734257  455293 system_pods.go:61] "etcd-pause-585281" [e75af0d5-3d47-45d7-8cc3-179065325573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:11:24.734280  455293 system_pods.go:61] "kindnet-rvgcd" [66cb6e5e-6ba4-4952-85dc-37b05e46b000] Running
	I1108 10:11:24.734321  455293 system_pods.go:61] "kube-apiserver-pause-585281" [e2a58e2e-8d62-413d-9269-873c844d5b6c] Running
	I1108 10:11:24.734341  455293 system_pods.go:61] "kube-controller-manager-pause-585281" [571f0020-7fbb-4bb9-bbdc-fd0fb7735d17] Running
	I1108 10:11:24.734364  455293 system_pods.go:61] "kube-proxy-rv4j7" [81d952b7-1238-49d3-9e92-b4878ef4b207] Running
	I1108 10:11:24.734395  455293 system_pods.go:61] "kube-scheduler-pause-585281" [f343c463-edb4-432b-b1e6-1e9b1b4f1eed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:11:24.734421  455293 system_pods.go:74] duration metric: took 8.263466ms to wait for pod list to return data ...
	I1108 10:11:24.734444  455293 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:11:24.740290  455293 default_sa.go:45] found service account: "default"
	I1108 10:11:24.740349  455293 default_sa.go:55] duration metric: took 5.874633ms for default service account to be created ...
	I1108 10:11:24.740381  455293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:11:24.746824  455293 system_pods.go:86] 7 kube-system pods found
	I1108 10:11:24.746900  455293 system_pods.go:89] "coredns-66bc5c9577-6644g" [7a079b8a-6641-49c0-9045-67e660dfa443] Running
	I1108 10:11:24.746923  455293 system_pods.go:89] "etcd-pause-585281" [e75af0d5-3d47-45d7-8cc3-179065325573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:11:24.746942  455293 system_pods.go:89] "kindnet-rvgcd" [66cb6e5e-6ba4-4952-85dc-37b05e46b000] Running
	I1108 10:11:24.746974  455293 system_pods.go:89] "kube-apiserver-pause-585281" [e2a58e2e-8d62-413d-9269-873c844d5b6c] Running
	I1108 10:11:24.747001  455293 system_pods.go:89] "kube-controller-manager-pause-585281" [571f0020-7fbb-4bb9-bbdc-fd0fb7735d17] Running
	I1108 10:11:24.747024  455293 system_pods.go:89] "kube-proxy-rv4j7" [81d952b7-1238-49d3-9e92-b4878ef4b207] Running
	I1108 10:11:24.747057  455293 system_pods.go:89] "kube-scheduler-pause-585281" [f343c463-edb4-432b-b1e6-1e9b1b4f1eed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:11:24.747088  455293 system_pods.go:126] duration metric: took 6.686705ms to wait for k8s-apps to be running ...
	I1108 10:11:24.747112  455293 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:11:24.747195  455293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:11:24.870184  455293 system_svc.go:56] duration metric: took 123.06236ms WaitForService to wait for kubelet
	I1108 10:11:24.870260  455293 kubeadm.go:587] duration metric: took 15.854045048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:11:24.870300  455293 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:11:24.880868  455293 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:11:24.880963  455293 node_conditions.go:123] node cpu capacity is 2
	I1108 10:11:24.880992  455293 node_conditions.go:105] duration metric: took 10.669113ms to run NodePressure ...
	I1108 10:11:24.881018  455293 start.go:242] waiting for startup goroutines ...
	I1108 10:11:24.881055  455293 start.go:247] waiting for cluster config update ...
	I1108 10:11:24.881079  455293 start.go:256] writing updated cluster config ...
	I1108 10:11:24.881456  455293 ssh_runner.go:195] Run: rm -f paused
	I1108 10:11:24.893697  455293 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:11:24.894328  455293 kapi.go:59] client config for pause-585281: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key", CAFile:"/home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21275c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 10:11:24.899661  455293 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6644g" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:24.913961  455293 pod_ready.go:94] pod "coredns-66bc5c9577-6644g" is "Ready"
	I1108 10:11:24.914036  455293 pod_ready.go:86] duration metric: took 14.30372ms for pod "coredns-66bc5c9577-6644g" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:24.918554  455293 pod_ready.go:83] waiting for pod "etcd-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:11:26.927782  455293 pod_ready.go:104] pod "etcd-pause-585281" is not "Ready", error: <nil>
	I1108 10:11:25.481258  461087 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:11:27.408297  461087 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:11:28.452119  461087 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:11:28.798002  461087 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:11:28.798585  461087 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:11:28.803700  461087 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:11:28.807166  461087 out.go:252]   - Booting up control plane ...
	I1108 10:11:28.807297  461087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:11:28.807379  461087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:11:28.807463  461087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:11:28.827083  461087 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:11:28.827356  461087 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:11:28.834934  461087 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:11:28.835260  461087 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:11:28.835548  461087 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:11:28.970331  461087 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:11:28.970455  461087 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:11:29.973338  461087 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000817749s
	I1108 10:11:29.974647  461087 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:11:29.974746  461087 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1108 10:11:29.974846  461087 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:11:29.974937  461087 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1108 10:11:29.425097  455293 pod_ready.go:104] pod "etcd-pause-585281" is not "Ready", error: <nil>
	I1108 10:11:30.924468  455293 pod_ready.go:94] pod "etcd-pause-585281" is "Ready"
	I1108 10:11:30.924494  455293 pod_ready.go:86] duration metric: took 6.005867883s for pod "etcd-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.929673  455293 pod_ready.go:83] waiting for pod "kube-apiserver-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.934432  455293 pod_ready.go:94] pod "kube-apiserver-pause-585281" is "Ready"
	I1108 10:11:30.934455  455293 pod_ready.go:86] duration metric: took 4.75965ms for pod "kube-apiserver-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.939412  455293 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.944771  455293 pod_ready.go:94] pod "kube-controller-manager-pause-585281" is "Ready"
	I1108 10:11:30.944846  455293 pod_ready.go:86] duration metric: took 5.400941ms for pod "kube-controller-manager-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:30.947350  455293 pod_ready.go:83] waiting for pod "kube-proxy-rv4j7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:31.123116  455293 pod_ready.go:94] pod "kube-proxy-rv4j7" is "Ready"
	I1108 10:11:31.123195  455293 pod_ready.go:86] duration metric: took 175.771915ms for pod "kube-proxy-rv4j7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:31.322617  455293 pod_ready.go:83] waiting for pod "kube-scheduler-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:31.722018  455293 pod_ready.go:94] pod "kube-scheduler-pause-585281" is "Ready"
	I1108 10:11:31.722093  455293 pod_ready.go:86] duration metric: took 399.399591ms for pod "kube-scheduler-pause-585281" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:11:31.722121  455293 pod_ready.go:40] duration metric: took 6.828342603s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:11:31.831158  455293 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:11:31.834395  455293 out.go:179] * Done! kubectl is now configured to use "pause-585281" cluster and "default" namespace by default
	I1108 10:11:32.665264  461087 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.68894871s
	
	
	==> CRI-O <==
	Nov 08 10:11:14 pause-585281 crio[2155]: time="2025-11-08T10:11:14.041319447Z" level=info msg="Removed container f3ad499cb9437a4e259f93249bd95e93b63c48029618f98e02f9dc6922388226: kube-system/kindnet-rvgcd/kindnet-cni" id=6964c22b-aef3-43df-a9a1-79d0643128a6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.354173036Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bd8884c0-2b18-4bae-9c8d-5586ec888a97 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.355594219Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d3daa9f7-f586-427c-97d2-18f88506f0e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.3567696Z" level=info msg="Creating container: kube-system/kube-proxy-rv4j7/kube-proxy" id=47761892-43c2-4d95-be5f-b78c97ef06da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.356974401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.369276611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.370050325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.428479131Z" level=info msg="Created container f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f: kube-system/kube-proxy-rv4j7/kube-proxy" id=47761892-43c2-4d95-be5f-b78c97ef06da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.429326609Z" level=info msg="Starting container: f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f" id=341d5116-f1c8-41d2-bd73-ffa8b606a04b name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:11:21 pause-585281 crio[2155]: time="2025-11-08T10:11:21.434984239Z" level=info msg="Started container" PID=2544 containerID=f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f description=kube-system/kube-proxy-rv4j7/kube-proxy id=341d5116-f1c8-41d2-bd73-ffa8b606a04b name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc4731b177834525adf9140f49a4d6f3e4ffda8978ff81bfd2e49863829cdc83
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.415472089Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.426838509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.427000101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.427074186Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.441029187Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.441309581Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.44140463Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.451962153Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.452122875Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.452200217Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.461051411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.461236651Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.461339847Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.469802543Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:11:24 pause-585281 crio[2155]: time="2025-11-08T10:11:24.470468499Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f5198d16f3954       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   17 seconds ago      Running             kube-proxy                2                   dc4731b177834       kube-proxy-rv4j7                       kube-system
	9bd33a96a682a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago      Running             kube-scheduler            2                   7956d5f4f017d       kube-scheduler-pause-585281            kube-system
	0f67a9dac80be       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago      Running             kube-controller-manager   2                   3673971eb3a1e       kube-controller-manager-pause-585281   kube-system
	6a1c4f1c1aebd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago      Running             etcd                      2                   5d4b7f88c5f51       etcd-pause-585281                      kube-system
	05f96289e87d5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago      Running             kube-apiserver            2                   674aaf7b40abf       kube-apiserver-pause-585281            kube-system
	a5d01500a7731       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago      Running             kindnet-cni               2                   a2e81392cf87b       kindnet-rvgcd                          kube-system
	2e09acaec05a4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   27 seconds ago      Running             coredns                   2                   44397206ad113       coredns-66bc5c9577-6644g               kube-system
	da0726e75d242       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago       Created             kube-proxy                1                   dc4731b177834       kube-proxy-rv4j7                       kube-system
	5d52d78fc5243       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Exited              coredns                   1                   44397206ad113       coredns-66bc5c9577-6644g               kube-system
	bbd37a53b3ff9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   2 minutes ago       Exited              kube-apiserver            1                   674aaf7b40abf       kube-apiserver-pause-585281            kube-system
	cd79c121a8019       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Exited              kindnet-cni               1                   a2e81392cf87b       kindnet-rvgcd                          kube-system
	3e717018e4db1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Exited              etcd                      1                   5d4b7f88c5f51       etcd-pause-585281                      kube-system
	f9491753f6ec7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago       Exited              kube-controller-manager   1                   3673971eb3a1e       kube-controller-manager-pause-585281   kube-system
	b3d2b33d28762       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   2 minutes ago       Exited              kube-scheduler            1                   7956d5f4f017d       kube-scheduler-pause-585281            kube-system
	3d3366a82a04a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago       Exited              kube-proxy                0                   dc4731b177834       kube-proxy-rv4j7                       kube-system
	
	
	==> coredns [2e09acaec05a41e735e1fd9867f0a8dd659729440cdc0700cbf72474301bfae9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57220 - 14329 "HINFO IN 2572719623816400100.8068358031023892758. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009521226s
	
	
	==> coredns [5d52d78fc52433186f3c29b69422aeae2f0c3db8c1adcfdf65dedf62e4a27f1a] <==
	
	
	==> describe nodes <==
	Name:               pause-585281
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-585281
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=pause-585281
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_08_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:08:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-585281
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:11:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:09:27 +0000   Sat, 08 Nov 2025 10:08:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:09:27 +0000   Sat, 08 Nov 2025 10:08:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:09:27 +0000   Sat, 08 Nov 2025 10:08:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:09:27 +0000   Sat, 08 Nov 2025 10:09:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-585281
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                54df2734-eac6-4be0-82ad-63063c5bfadc
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6644g                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m57s
	  kube-system                 etcd-pause-585281                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m2s
	  kube-system                 kindnet-rvgcd                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m57s
	  kube-system                 kube-apiserver-pause-585281             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-controller-manager-pause-585281    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 kube-proxy-rv4j7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-scheduler-pause-585281             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m54s                  kube-proxy       
	  Normal   Starting                 12s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  3m10s (x8 over 3m10s)  kubelet          Node pause-585281 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m10s (x8 over 3m10s)  kubelet          Node pause-585281 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m10s (x8 over 3m10s)  kubelet          Node pause-585281 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 3m2s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m2s                   kubelet          Node pause-585281 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m2s                   kubelet          Node pause-585281 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m2s                   kubelet          Node pause-585281 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m2s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m58s                  node-controller  Node pause-585281 event: Registered Node pause-585281 in Controller
	  Normal   NodeReady                2m14s                  kubelet          Node pause-585281 status is now: NodeReady
	  Warning  ContainerGCFailed        62s                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             29s (x7 over 90s)      kubelet          Node pause-585281 status is now: NodeNotReady
	  Normal   RegisteredNode           12s                    node-controller  Node pause-585281 event: Registered Node pause-585281 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:42] overlayfs: idmapped layers are currently not supported
	[  +3.260945] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:43] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:44] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:45] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:50] overlayfs: idmapped layers are currently not supported
	[ +37.319908] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:51] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e717018e4db1225a33be4045b2d1897c1b736eb0f7d54c1a6afd67748e324c0] <==
	
	
	==> etcd [6a1c4f1c1aebd9dc507372057dbea49115539de3328fd5fc6cc5c24ab0cfa8bf] <==
	{"level":"warn","ts":"2025-11-08T10:11:20.207282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.220207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.281599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.293690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.316636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.352627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.368458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.410997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.429083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.475602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.509424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.524515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.555127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.585658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.637263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.697182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.698097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.716406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.749046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.786771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.826588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.865648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.901111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:20.933507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:11:21.071573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54218","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:11:38 up  2:54,  0 user,  load average: 3.77, 2.47, 2.07
	Linux pause-585281 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a5d01500a77310a76b4659b56909c387796572bf5f8c6be88ba3a86442f8ee91] <==
	I1108 10:11:14.077642       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:11:14.077935       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:11:14.078194       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:11:14.078241       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:11:14.086131       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:11:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:11:14.417823       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:11:14.437053       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:11:14.438499       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:11:14.441255       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 10:11:22.839352       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:11:22.839495       1 metrics.go:72] Registering metrics
	I1108 10:11:22.839596       1 controller.go:711] "Syncing nftables rules"
	I1108 10:11:24.414973       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:11:24.415108       1 main.go:301] handling current node
	I1108 10:11:34.414036       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:11:34.414078       1 main.go:301] handling current node
	
	
	==> kindnet [cd79c121a8019fcb2c93baa98419929b529b33cd56a932dcc2771c55ae6e462c] <==
	
	
	==> kube-apiserver [05f96289e87d54b856123e4c909df122cf5ea7cafa2a1ea3251fd71853a64ef1] <==
	I1108 10:11:22.437437       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 10:11:22.610418       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:11:22.610447       1 policy_source.go:240] refreshing policies
	I1108 10:11:22.633622       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:11:22.649146       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:11:22.649386       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:11:22.649406       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:11:22.649524       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 10:11:22.649565       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 10:11:22.649601       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:11:22.649639       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:11:22.670784       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:11:22.670812       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:11:22.670820       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:11:22.675287       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:11:22.703524       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:11:22.719405       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:11:22.719453       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:11:22.719644       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:11:22.770894       1 cache.go:39] Caches are synced for autoregister controller
	E1108 10:11:22.803193       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:11:22.803311       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:11:22.816884       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:11:23.273085       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:11:24.810079       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-apiserver [bbd37a53b3ff900f2ae1d8b0266a6f002e6a17e20c476e6951de770c40fd31b1] <==
	
	
	==> kube-controller-manager [0f67a9dac80bea7b55941628cdf508d953ef6b68882be9639e87c90d34ad85c0] <==
	I1108 10:11:26.216609       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:11:26.220579       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:11:26.221050       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:11:26.224094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:11:26.224546       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:11:26.228958       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:11:26.240597       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:11:26.241895       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:11:26.252440       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:11:26.256239       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:11:26.263613       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:11:26.264163       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 10:11:26.265425       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:11:26.265505       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:11:26.266807       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:11:26.266918       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:11:26.267026       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-585281"
	I1108 10:11:26.267090       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:11:26.267158       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:11:26.269328       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:11:26.291165       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:11:26.291267       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:11:26.291298       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:11:26.299639       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:11:26.303983       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [f9491753f6ec75b40577ec5da4f195b64c30357340a9a2f07567a89929f81bc7] <==
	
	
	==> kube-proxy [3d3366a82a04a0d348be12815d6091dcbdf94d13f14ca32a0c7e5d22a7109d78] <==
	I1108 10:08:43.878074       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:08:43.974591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:08:44.076513       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:08:44.076550       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:08:44.076650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:08:44.099970       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:08:44.100024       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:08:44.104510       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:08:44.104986       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:08:44.105048       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:08:44.106445       1 config.go:200] "Starting service config controller"
	I1108 10:08:44.106515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:08:44.106564       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:08:44.106593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:08:44.106616       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:08:44.106620       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:08:44.107276       1 config.go:309] "Starting node config controller"
	I1108 10:08:44.107295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:08:44.107301       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:08:44.206702       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:08:44.206717       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:08:44.206746       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b] <==
	
	
	==> kube-proxy [f5198d16f395454d32f4a28ffd87da5dbca345dbba28c04c3ea6f9ea1322b53f] <==
	I1108 10:11:22.482168       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:11:24.030677       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:11:24.367109       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:11:24.367224       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:11:24.373867       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:11:25.821592       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:11:25.833023       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:11:25.899394       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:11:25.899797       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:11:25.900015       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:11:25.901299       1 config.go:200] "Starting service config controller"
	I1108 10:11:25.901359       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:11:25.901401       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:11:25.901427       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:11:25.901461       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:11:25.901487       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:11:25.902187       1 config.go:309] "Starting node config controller"
	I1108 10:11:25.904748       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:11:25.904811       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:11:26.002469       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:11:26.002573       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:11:26.002601       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9bd33a96a682a6bf1f4bd44fbc1b47163722c9b2f12470e8668831c357f338c0] <==
	I1108 10:11:22.519122       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:11:25.846248       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:11:25.849286       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:11:25.854800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:11:25.855193       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:11:25.855269       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:11:25.855319       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:11:25.865888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:11:25.869590       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:11:25.869030       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:11:25.869725       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:11:25.955523       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:11:25.970659       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:11:25.970787       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b3d2b33d28762a416c5285c0c97c70b46ec8d299c7cf04769fff1e92b29b0419] <==
	
	
	==> kubelet <==
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.863346    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="76f74cdae75fdece871328a0e2fefc7a" pod="kube-system/etcd-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.863632    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c8721fdd20213e2c8efee2be82951653" pod="kube-system/kube-scheduler-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.863883    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5679dbf1f61a74e3cb78cd593ed3ec9f" pod="kube-system/kube-apiserver-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.864091    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0c36ac7bed6002836aed35c53aaf6af0" pod="kube-system/kube-controller-manager-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.864331    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rv4j7\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="81d952b7-1238-49d3-9e92-b4878ef4b207" pod="kube-system/kube-proxy-rv4j7"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.864583    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rvgcd\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66cb6e5e-6ba4-4952-85dc-37b05e46b000" pod="kube-system/kindnet-rvgcd"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: I1108 10:11:13.895506    1304 scope.go:117] "RemoveContainer" containerID="bbe481630744841ccaed0db9fce6bb52bc510b3db87e24ad66ea3eebe37bebe9"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.918957    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-6644g\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7a079b8a-6641-49c0-9045-67e660dfa443" pod="kube-system/coredns-66bc5c9577-6644g"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.919247    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="76f74cdae75fdece871328a0e2fefc7a" pod="kube-system/etcd-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.919489    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c8721fdd20213e2c8efee2be82951653" pod="kube-system/kube-scheduler-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.919703    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5679dbf1f61a74e3cb78cd593ed3ec9f" pod="kube-system/kube-apiserver-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.919903    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-585281\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0c36ac7bed6002836aed35c53aaf6af0" pod="kube-system/kube-controller-manager-pause-585281"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.920127    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rv4j7\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="81d952b7-1238-49d3-9e92-b4878ef4b207" pod="kube-system/kube-proxy-rv4j7"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: E1108 10:11:13.920312    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rvgcd\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66cb6e5e-6ba4-4952-85dc-37b05e46b000" pod="kube-system/kindnet-rvgcd"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: I1108 10:11:13.942572    1304 scope.go:117] "RemoveContainer" containerID="c0521ccb74deafa2ff1de55dcde0a8e896e81ca44ba308ff269d6f5a89c789ed"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: I1108 10:11:13.977935    1304 scope.go:117] "RemoveContainer" containerID="bb3bc1b7161e709f57eb7e833763492d5028f77a4eda96f6b8dc67a64c5adfc1"
	Nov 08 10:11:13 pause-585281 kubelet[1304]: I1108 10:11:13.995768    1304 scope.go:117] "RemoveContainer" containerID="479d30e3e53d7f09520a9e0325d8e785b53081f9b1f26424b87e0c9430a03b2e"
	Nov 08 10:11:14 pause-585281 kubelet[1304]: I1108 10:11:14.014624    1304 scope.go:117] "RemoveContainer" containerID="f3ad499cb9437a4e259f93249bd95e93b63c48029618f98e02f9dc6922388226"
	Nov 08 10:11:21 pause-585281 kubelet[1304]: I1108 10:11:21.353175    1304 scope.go:117] "RemoveContainer" containerID="da0726e75d2420dce11abfbb6c5af513cc4ce254db7dbea8dda3e4a49316618b"
	Nov 08 10:11:22 pause-585281 kubelet[1304]: E1108 10:11:22.402865    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-rvgcd\" is forbidden: User \"system:node:pause-585281\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-585281' and this object" podUID="66cb6e5e-6ba4-4952-85dc-37b05e46b000" pod="kube-system/kindnet-rvgcd"
	Nov 08 10:11:22 pause-585281 kubelet[1304]: E1108 10:11:22.531634    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-6644g\" is forbidden: User \"system:node:pause-585281\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-585281' and this object" podUID="7a079b8a-6641-49c0-9045-67e660dfa443" pod="kube-system/coredns-66bc5c9577-6644g"
	Nov 08 10:11:22 pause-585281 kubelet[1304]: E1108 10:11:22.624754    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-585281\" is forbidden: User \"system:node:pause-585281\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-585281' and this object" podUID="76f74cdae75fdece871328a0e2fefc7a" pod="kube-system/etcd-pause-585281"
	Nov 08 10:11:32 pause-585281 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:11:32 pause-585281 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:11:32 pause-585281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
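Note on the pod_ready.go entries near the top of the captured log: they poll each kube-system control-plane pod (matched by the component/k8s-app labels) until its Ready condition is True or the pod is gone, with a 4m0s ceiling per pod. A minimal client-go sketch of that polling pattern follows; it is an illustration only, and the helper name waitPodReady, the 500ms poll interval, and the hard-coded example pod are assumptions rather than minikube's actual code.

	// waitpodready.go: sketch of the "Ready or gone" poll described by the
	// pod_ready.go log lines above. Names and timings here are illustrative
	// assumptions, not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports Ready=True or disappears.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return true, nil // pod is gone, which the test also accepts
				}
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-pause-585281"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready (or gone)")
	}

Run against the kubeconfig the test writes (clientcmd.RecommendedHomeFile resolves to ~/.kube/config), this reproduces the same per-pod wait semantics the log reports for etcd-pause-585281.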
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-585281 -n pause-585281
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-585281 -n pause-585281: exit status 2 (489.033952ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-585281 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (272.196357ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:13:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-332573 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-332573 describe deploy/metrics-server -n kube-system: exit status 1 (84.595575ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-332573 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
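Because the enable call failed, the metrics-server deployment was never created, so there is no deployment info to compare against the expected "fake.domain/registry.k8s.io/echoserver:1.4" image. Once the addon does come up, the image the test asserts on can be read directly (a sketch using the same kubectl context as above):

    kubectl --context old-k8s-version-332573 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4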
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-332573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-332573:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35",
	        "Created": "2025-11-08T10:12:40.555240094Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470851,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:12:40.620798564Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/hosts",
	        "LogPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35-json.log",
	        "Name": "/old-k8s-version-332573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-332573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-332573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35",
	                "LowerDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-332573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-332573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-332573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-332573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-332573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6094c2c79a787a5910dc8c6653cad8edd8758176c1fe465b8be84a11db9aca3b",
	            "SandboxKey": "/var/run/docker/netns/6094c2c79a78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-332573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:05:5c:f7:fb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6bc21555591f9a2508b903e9b9efd09495777b9b74fcdbe032a687f04b909be0",
	                    "EndpointID": "89cd5edee3d077b146bf94a0e8bcd4a757134b43308f3edd989ffc69407e6656",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-332573",
	                        "9c2d89f29f92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
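The inspect dump above holds the same fields the provisioning log below reads back one at a time with Go templates; individual values can be pulled without dumping the whole document. Two examples against the same container (the first format string is the one used verbatim in the logs below, the second simply names fields visible in the JSON above):

    # host port published for the node's SSH endpoint (22/tcp)
    docker container inspect old-k8s-version-332573 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # resource limits applied to the kic container
    docker container inspect old-k8s-version-332573 \
      --format 'mem={{.HostConfig.Memory}} nanocpus={{.HostConfig.NanoCpus}}'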
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-332573 -n old-k8s-version-332573
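The helper only asks for the Host field here; other component states can be requested with the same --format flag (Kubelet and APIServer are standard minikube status fields, assumed present in this build):

    out/minikube-linux-arm64 status -p old-k8s-version-332573 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'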
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-332573 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-332573 logs -n 25: (1.260534811s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-099098 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo containerd config dump                                                                                                                                                                                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo crio config                                                                                                                                                                                                             │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ delete  │ -p cilium-099098                                                                                                                                                                                                                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p force-systemd-env-000082 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ pause   │ -p pause-585281 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │                     │
	│ delete  │ -p pause-585281                                                                                                                                                                                                                               │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ delete  │ -p force-systemd-env-000082                                                                                                                                                                                                                   │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p cert-options-916440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ cert-options-916440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ -p cert-options-916440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ delete  │ -p cert-options-916440                                                                                                                                                                                                                        │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:12:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:12:34.698198  470460 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:12:34.698405  470460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:12:34.698433  470460 out.go:374] Setting ErrFile to fd 2...
	I1108 10:12:34.698452  470460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:12:34.698771  470460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:12:34.699271  470460 out.go:368] Setting JSON to false
	I1108 10:12:34.700229  470460 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10504,"bootTime":1762586251,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:12:34.700322  470460 start.go:143] virtualization:  
	I1108 10:12:34.704834  470460 out.go:179] * [old-k8s-version-332573] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:12:34.709467  470460 notify.go:221] Checking for updates...
	I1108 10:12:34.713073  470460 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:12:34.717138  470460 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:12:34.720595  470460 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:12:34.723718  470460 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:12:34.726799  470460 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:12:34.729841  470460 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:12:34.733481  470460 config.go:182] Loaded profile config "cert-expiration-328489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:12:34.733646  470460 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:12:34.781043  470460 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:12:34.781169  470460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:12:34.849052  470460 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:12:34.839026474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:12:34.849165  470460 docker.go:319] overlay module found
	I1108 10:12:34.852286  470460 out.go:179] * Using the docker driver based on user configuration
	I1108 10:12:34.855210  470460 start.go:309] selected driver: docker
	I1108 10:12:34.855237  470460 start.go:930] validating driver "docker" against <nil>
	I1108 10:12:34.855251  470460 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:12:34.856052  470460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:12:34.911825  470460 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:12:34.902780474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:12:34.911981  470460 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:12:34.912226  470460 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:12:34.915226  470460 out.go:179] * Using Docker driver with root privileges
	I1108 10:12:34.918414  470460 cni.go:84] Creating CNI manager for ""
	I1108 10:12:34.918488  470460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:12:34.918501  470460 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:12:34.918585  470460 start.go:353] cluster config:
	{Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:12:34.921733  470460 out.go:179] * Starting "old-k8s-version-332573" primary control-plane node in "old-k8s-version-332573" cluster
	I1108 10:12:34.924562  470460 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:12:34.927495  470460 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:12:34.930333  470460 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:12:34.930395  470460 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 10:12:34.930408  470460 cache.go:59] Caching tarball of preloaded images
	I1108 10:12:34.930415  470460 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:12:34.930511  470460 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:12:34.930522  470460 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1108 10:12:34.930626  470460 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/config.json ...
	I1108 10:12:34.930649  470460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/config.json: {Name:mk23565eae5dcf675bcbfaa59dae97ef8c8b76c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:12:34.950378  470460 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:12:34.950401  470460 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:12:34.950417  470460 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:12:34.950440  470460 start.go:360] acquireMachinesLock for old-k8s-version-332573: {Name:mkf00cfa98960d68304c3826065c66fd6bccf2d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:12:34.950556  470460 start.go:364] duration metric: took 90.905µs to acquireMachinesLock for "old-k8s-version-332573"
	I1108 10:12:34.950590  470460 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:12:34.950656  470460 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:12:34.954123  470460 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:12:34.954360  470460 start.go:159] libmachine.API.Create for "old-k8s-version-332573" (driver="docker")
	I1108 10:12:34.954405  470460 client.go:173] LocalClient.Create starting
	I1108 10:12:34.954471  470460 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:12:34.954508  470460 main.go:143] libmachine: Decoding PEM data...
	I1108 10:12:34.954525  470460 main.go:143] libmachine: Parsing certificate...
	I1108 10:12:34.954575  470460 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:12:34.954597  470460 main.go:143] libmachine: Decoding PEM data...
	I1108 10:12:34.954611  470460 main.go:143] libmachine: Parsing certificate...
	I1108 10:12:34.954994  470460 cli_runner.go:164] Run: docker network inspect old-k8s-version-332573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:12:34.972373  470460 cli_runner.go:211] docker network inspect old-k8s-version-332573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:12:34.972469  470460 network_create.go:284] running [docker network inspect old-k8s-version-332573] to gather additional debugging logs...
	I1108 10:12:34.972492  470460 cli_runner.go:164] Run: docker network inspect old-k8s-version-332573
	W1108 10:12:34.989734  470460 cli_runner.go:211] docker network inspect old-k8s-version-332573 returned with exit code 1
	I1108 10:12:34.989787  470460 network_create.go:287] error running [docker network inspect old-k8s-version-332573]: docker network inspect old-k8s-version-332573: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-332573 not found
	I1108 10:12:34.989802  470460 network_create.go:289] output of [docker network inspect old-k8s-version-332573]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-332573 not found
	
	** /stderr **
	I1108 10:12:34.989931  470460 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:12:35.014050  470460 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:12:35.014448  470460 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:12:35.014697  470460 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:12:35.014993  470460 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8ca534186826 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:6e:38:a0:fa:5f} reservation:<nil>}
	I1108 10:12:35.015462  470460 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a06900}
	I1108 10:12:35.015489  470460 network_create.go:124] attempt to create docker network old-k8s-version-332573 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 10:12:35.015552  470460 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-332573 old-k8s-version-332573
	I1108 10:12:35.077381  470460 network_create.go:108] docker network old-k8s-version-332573 192.168.85.0/24 created
	I1108 10:12:35.077414  470460 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-332573" container
	I1108 10:12:35.077491  470460 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:12:35.096207  470460 cli_runner.go:164] Run: docker volume create old-k8s-version-332573 --label name.minikube.sigs.k8s.io=old-k8s-version-332573 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:12:35.115108  470460 oci.go:103] Successfully created a docker volume old-k8s-version-332573
	I1108 10:12:35.115194  470460 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-332573-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-332573 --entrypoint /usr/bin/test -v old-k8s-version-332573:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:12:35.644101  470460 oci.go:107] Successfully prepared a docker volume old-k8s-version-332573
	I1108 10:12:35.644172  470460 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:12:35.644207  470460 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:12:35.644294  470460 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-332573:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 10:12:40.465623  470460 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-332573:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.82128064s)
	I1108 10:12:40.465657  470460 kic.go:203] duration metric: took 4.821448895s to extract preloaded images to volume ...
	W1108 10:12:40.465839  470460 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:12:40.465959  470460 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:12:40.539408  470460 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-332573 --name old-k8s-version-332573 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-332573 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-332573 --network old-k8s-version-332573 --ip 192.168.85.2 --volume old-k8s-version-332573:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:12:40.839331  470460 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Running}}
	I1108 10:12:40.870570  470460 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:12:40.897696  470460 cli_runner.go:164] Run: docker exec old-k8s-version-332573 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:12:40.953038  470460 oci.go:144] the created container "old-k8s-version-332573" has a running status.
	I1108 10:12:40.953065  470460 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa...
	I1108 10:12:41.501279  470460 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:12:41.523037  470460 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:12:41.545368  470460 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:12:41.545391  470460 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-332573 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:12:41.588795  470460 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:12:41.608603  470460 machine.go:94] provisionDockerMachine start ...
	I1108 10:12:41.608700  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:41.627143  470460 main.go:143] libmachine: Using SSH client type: native
	I1108 10:12:41.627494  470460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1108 10:12:41.627510  470460 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:12:41.628147  470460 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:12:44.784660  470460 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332573
	
	I1108 10:12:44.784682  470460 ubuntu.go:182] provisioning hostname "old-k8s-version-332573"
	I1108 10:12:44.784755  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:44.801788  470460 main.go:143] libmachine: Using SSH client type: native
	I1108 10:12:44.802096  470460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1108 10:12:44.802111  470460 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-332573 && echo "old-k8s-version-332573" | sudo tee /etc/hostname
	I1108 10:12:44.966481  470460 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332573
	
	I1108 10:12:44.966576  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:44.985640  470460 main.go:143] libmachine: Using SSH client type: native
	I1108 10:12:44.985957  470460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1108 10:12:44.985980  470460 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-332573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-332573/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-332573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:12:45.183723  470460 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:12:45.183778  470460 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:12:45.183805  470460 ubuntu.go:190] setting up certificates
	I1108 10:12:45.183815  470460 provision.go:84] configureAuth start
	I1108 10:12:45.183889  470460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:12:45.211013  470460 provision.go:143] copyHostCerts
	I1108 10:12:45.211199  470460 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:12:45.211219  470460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:12:45.211319  470460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:12:45.211478  470460 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:12:45.211496  470460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:12:45.211529  470460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:12:45.211628  470460 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:12:45.211639  470460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:12:45.211666  470460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:12:45.211787  470460 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-332573 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-332573]
	I1108 10:12:45.514231  470460 provision.go:177] copyRemoteCerts
	I1108 10:12:45.514299  470460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:12:45.514343  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:45.532448  470460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:12:45.641018  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:12:45.659505  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1108 10:12:45.679367  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:12:45.698485  470460 provision.go:87] duration metric: took 514.655891ms to configureAuth
	I1108 10:12:45.698511  470460 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:12:45.698703  470460 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:12:45.698816  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:45.716618  470460 main.go:143] libmachine: Using SSH client type: native
	I1108 10:12:45.716969  470460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1108 10:12:45.716992  470460 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:12:45.982967  470460 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:12:45.982990  470460 machine.go:97] duration metric: took 4.374363687s to provisionDockerMachine
	I1108 10:12:45.983000  470460 client.go:176] duration metric: took 11.028582897s to LocalClient.Create
	I1108 10:12:45.983013  470460 start.go:167] duration metric: took 11.028655324s to libmachine.API.Create "old-k8s-version-332573"
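
The provisioning step just above writes a one-line sysconfig drop-in (CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ') and restarts CRI-O over SSH. A minimal Go sketch of the same idea follows; it is not minikube's internal ssh_runner, it simply shells out to the ssh binary, and the host, port and key path are placeholders standing in for this run's values (127.0.0.1:33418 and the profile's id_rsa).

// crio_sysconfig.go: illustrative only; assumes passwordless sudo on the target
// node and an ssh client binary on PATH. Host/port/key are placeholders.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const host = "docker@127.0.0.1" // placeholder for the node's SSH endpoint
	const port = "33418"            // placeholder for the mapped 22/tcp port
	const key = "/path/to/id_rsa"   // placeholder for the profile's machine key

	// Same shape as the command in the log: write the drop-in, then restart CRI-O.
	remote := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

	cmd := exec.Command("ssh", "-i", key, "-p", port,
		"-o", "StrictHostKeyChecking=no", host, remote)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("remote provisioning failed: %v", err)
	}
	fmt.Println("CRI-O sysconfig drop-in written and service restarted")
}
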
	I1108 10:12:45.983020  470460 start.go:293] postStartSetup for "old-k8s-version-332573" (driver="docker")
	I1108 10:12:45.983030  470460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:12:45.983106  470460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:12:45.983163  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:46.001874  470460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:12:46.113187  470460 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:12:46.116581  470460 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:12:46.116612  470460 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:12:46.116624  470460 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:12:46.116678  470460 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:12:46.116760  470460 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:12:46.116872  470460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:12:46.124459  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:12:46.143032  470460 start.go:296] duration metric: took 159.996915ms for postStartSetup
	I1108 10:12:46.143487  470460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:12:46.161313  470460 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/config.json ...
	I1108 10:12:46.161609  470460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:12:46.161660  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:46.179901  470460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:12:46.282229  470460 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:12:46.287309  470460 start.go:128] duration metric: took 11.336638088s to createHost
	I1108 10:12:46.287340  470460 start.go:83] releasing machines lock for "old-k8s-version-332573", held for 11.336769338s
	I1108 10:12:46.287433  470460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:12:46.305114  470460 ssh_runner.go:195] Run: cat /version.json
	I1108 10:12:46.305141  470460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:12:46.305167  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:46.305216  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:12:46.328658  470460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:12:46.338611  470460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:12:46.527168  470460 ssh_runner.go:195] Run: systemctl --version
	I1108 10:12:46.533908  470460 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:12:46.572240  470460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:12:46.577983  470460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:12:46.578052  470460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:12:46.608115  470460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:12:46.608140  470460 start.go:496] detecting cgroup driver to use...
	I1108 10:12:46.608183  470460 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:12:46.608236  470460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:12:46.626092  470460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:12:46.639328  470460 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:12:46.639410  470460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:12:46.655920  470460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:12:46.675805  470460 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:12:46.802973  470460 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:12:46.932753  470460 docker.go:234] disabling docker service ...
	I1108 10:12:46.932887  470460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:12:46.954166  470460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:12:46.968497  470460 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:12:47.093360  470460 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:12:47.216370  470460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:12:47.230946  470460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:12:47.246265  470460 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 10:12:47.246351  470460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:12:47.255298  470460 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:12:47.255377  470460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:12:47.264794  470460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:12:47.274110  470460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:12:47.283774  470460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:12:47.292325  470460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:12:47.301296  470460 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:12:47.315262  470460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:12:47.325138  470460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:12:47.337345  470460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:12:47.345069  470460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:12:47.456940  470460 ssh_runner.go:195] Run: sudo systemctl restart crio
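
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins the pause image, switches cgroup_manager to cgroupfs, and re-adds the conmon_cgroup and default_sysctls entries before reloading systemd and restarting CRI-O. A rough local equivalent of the first two edits, assuming the same drop-in path and root privileges on the node, is sketched below using Go's regexp package instead of sed.

// rewrite_crio_conf.go: sketch of the pause_image / cgroup_manager edits shown
// in the log; run as root on the node, against the same drop-in file.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
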
	I1108 10:12:47.586403  470460 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:12:47.586499  470460 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:12:47.590526  470460 start.go:564] Will wait 60s for crictl version
	I1108 10:12:47.590606  470460 ssh_runner.go:195] Run: which crictl
	I1108 10:12:47.594510  470460 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:12:47.622932  470460 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:12:47.623031  470460 ssh_runner.go:195] Run: crio --version
	I1108 10:12:47.653496  470460 ssh_runner.go:195] Run: crio --version
	I1108 10:12:47.685184  470460 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1108 10:12:47.688020  470460 cli_runner.go:164] Run: docker network inspect old-k8s-version-332573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:12:47.714194  470460 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:12:47.718247  470460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
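
The two commands above make the host.minikube.internal record idempotent: grep for an existing entry, then rewrite /etc/hosts with any old record filtered out and the fresh one appended. A small Go sketch of the same filter-and-append pattern (hostsPath is a variable so the sketch can be pointed at a scratch copy rather than the real /etc/hosts):

// hosts_record.go: sketch of the idempotent /etc/hosts update from the log.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	hostsPath := "/etc/hosts"
	record := "192.168.85.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	// Drop any previous host.minikube.internal line, keep everything else.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Append the fresh record, mirroring the shell one-liner in the log.
	kept = append(kept, record)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
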
	I1108 10:12:47.728315  470460 kubeadm.go:884] updating cluster {Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:12:47.728437  470460 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:12:47.728495  470460 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:12:47.761159  470460 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:12:47.761186  470460 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:12:47.761251  470460 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:12:47.787287  470460 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:12:47.787314  470460 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:12:47.787324  470460 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1108 10:12:47.787416  470460 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-332573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
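
The kubelet drop-in above is built from the node's name and IP before being scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A minimal text/template rendering of that same unit, with the flag values from this run filled in, might look like the following sketch (it writes to stdout; in the real flow the bytes are copied to the node):

// kubelet_dropin.go: sketch of rendering the kubelet systemd drop-in shown above.
package main

import (
	"log"
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.28.0", "old-k8s-version-332573", "192.168.85.2"}

	tmpl := template.Must(template.New("kubelet").Parse(dropin))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}
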
	I1108 10:12:47.787512  470460 ssh_runner.go:195] Run: crio config
	I1108 10:12:47.868521  470460 cni.go:84] Creating CNI manager for ""
	I1108 10:12:47.868540  470460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:12:47.868556  470460 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:12:47.868601  470460 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-332573 NodeName:old-k8s-version-332573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:12:47.868800  470460 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-332573"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:12:47.868882  470460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1108 10:12:47.876817  470460 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:12:47.876900  470460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:12:47.884533  470460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1108 10:12:47.897424  470460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:12:47.911499  470460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
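
The kubeadm config generated above lands on the node as the multi-document YAML at /var/tmp/minikube/kubeadm.yaml.new. A quick way to sanity-check it in place, assuming gopkg.in/yaml.v3 is available, is to decode each document and print its apiVersion and kind (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration for the config shown in this log):

// inspect_kubeadm_yaml.go: sketch that walks the multi-document kubeadm config
// written to the node and prints each document's identity.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
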
	I1108 10:12:47.924895  470460 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:12:47.928821  470460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:12:47.939450  470460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:12:48.065294  470460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:12:48.084718  470460 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573 for IP: 192.168.85.2
	I1108 10:12:48.084784  470460 certs.go:195] generating shared ca certs ...
	I1108 10:12:48.084814  470460 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:12:48.085019  470460 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:12:48.085094  470460 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:12:48.085129  470460 certs.go:257] generating profile certs ...
	I1108 10:12:48.085206  470460 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.key
	I1108 10:12:48.085244  470460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt with IP's: []
	I1108 10:12:49.150004  470460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt ...
	I1108 10:12:49.150078  470460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: {Name:mkae976c4818bece1ce272d9f0399b80f8cbb87e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:12:49.150304  470460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.key ...
	I1108 10:12:49.150339  470460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.key: {Name:mkca814233f8a4704079485ea27b457493969f7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:12:49.150496  470460 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key.99f33f23
	I1108 10:12:49.150536  470460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.crt.99f33f23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 10:12:49.619150  470460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.crt.99f33f23 ...
	I1108 10:12:49.619181  470460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.crt.99f33f23: {Name:mk29f35130cd56686092349cc099b1bd223e566a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:12:49.619366  470460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key.99f33f23 ...
	I1108 10:12:49.619381  470460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key.99f33f23: {Name:mk760660b1cf6a1bc209b26588f85b3f47ccb9ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:12:49.619469  470460 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.crt.99f33f23 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.crt
	I1108 10:12:49.619554  470460 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key.99f33f23 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key
	I1108 10:12:49.619616  470460 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.key
	I1108 10:12:49.619635  470460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.crt with IP's: []
	I1108 10:12:50.391000  470460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.crt ...
	I1108 10:12:50.391033  470460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.crt: {Name:mk92f8003fef1c3cd88fd11f4ed7246f28d3e622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:12:50.391221  470460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.key ...
	I1108 10:12:50.391239  470460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.key: {Name:mk785d205a5451bf11f84f84f92fd5af1cb7afe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
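
The profile certificates generated above are ordinary x509 key pairs; the apiserver cert is issued with the service IP, loopback, and node IP as SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A self-contained sketch of producing a certificate with that SAN list from Go's standard crypto/x509 package follows; it is self-signed for brevity, whereas the real flow signs with the minikubeCA key shown earlier.

// cert_sans.go: sketch of creating a certificate with the IP SANs seen in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}

	// Self-signed: template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
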
	I1108 10:12:50.391435  470460 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:12:50.391482  470460 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:12:50.391496  470460 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:12:50.391520  470460 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:12:50.391548  470460 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:12:50.391574  470460 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:12:50.391619  470460 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:12:50.392216  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:12:50.414398  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:12:50.433283  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:12:50.451727  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:12:50.470252  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1108 10:12:50.490504  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:12:50.511674  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:12:50.530945  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:12:50.550455  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:12:50.571732  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:12:50.589460  470460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:12:50.610465  470460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:12:50.623029  470460 ssh_runner.go:195] Run: openssl version
	I1108 10:12:50.629212  470460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:12:50.637409  470460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:12:50.640839  470460 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:12:50.640898  470460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:12:50.682317  470460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:12:50.691004  470460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:12:50.699566  470460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:12:50.703241  470460 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:12:50.703307  470460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:12:50.744315  470460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:12:50.752550  470460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:12:50.761020  470460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:12:50.764660  470460 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:12:50.764748  470460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:12:50.805986  470460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
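
The openssl x509 -hash calls above compute the subject hash that OpenSSL expects as a symlink name under /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem), which is exactly what the ln -fs commands then create. A small sketch of the same hash-then-symlink step, shelling out to the openssl binary and using paths from this run:

// ca_hash_link.go: sketch of the subject-hash symlink pattern from the log.
// Requires the openssl binary and write access to /etc/ssl/certs.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// openssl x509 -hash -noout -in <cert> prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
	_ = os.Remove(link) // ignore "does not exist"; mirrors ln -f
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, certPath)
}
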
	I1108 10:12:50.814337  470460 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:12:50.817984  470460 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:12:50.818046  470460 kubeadm.go:401] StartCluster: {Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:12:50.818127  470460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:12:50.818196  470460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:12:50.850458  470460 cri.go:89] found id: ""
	I1108 10:12:50.850543  470460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:12:50.858665  470460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:12:50.866731  470460 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:12:50.866828  470460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:12:50.874827  470460 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:12:50.874844  470460 kubeadm.go:158] found existing configuration files:
	
	I1108 10:12:50.874895  470460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:12:50.882533  470460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:12:50.882594  470460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:12:50.889801  470460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:12:50.898134  470460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:12:50.898253  470460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:12:50.906519  470460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:12:50.914385  470460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:12:50.914454  470460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:12:50.921835  470460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:12:50.929942  470460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:12:50.930018  470460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:12:50.937656  470460 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:12:51.026605  470460 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:12:51.108602  470460 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:13:10.501559  470460 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1108 10:13:10.501621  470460 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:13:10.501718  470460 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:13:10.501787  470460 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:13:10.501830  470460 kubeadm.go:319] OS: Linux
	I1108 10:13:10.501878  470460 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:13:10.501933  470460 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:13:10.501986  470460 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:13:10.502040  470460 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:13:10.502093  470460 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:13:10.502149  470460 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:13:10.502201  470460 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:13:10.502261  470460 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:13:10.502314  470460 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:13:10.502394  470460 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:13:10.502497  470460 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:13:10.502601  470460 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 10:13:10.502671  470460 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 10:13:10.505730  470460 out.go:252]   - Generating certificates and keys ...
	I1108 10:13:10.505828  470460 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:13:10.505900  470460 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:13:10.505976  470460 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:13:10.506040  470460 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:13:10.506106  470460 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:13:10.506163  470460 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:13:10.506224  470460 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:13:10.506361  470460 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-332573] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:13:10.506421  470460 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:13:10.506555  470460 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-332573] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:13:10.506629  470460 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:13:10.506701  470460 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:13:10.506751  470460 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:13:10.506814  470460 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:13:10.506872  470460 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:13:10.506940  470460 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:13:10.507018  470460 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:13:10.507079  470460 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:13:10.507168  470460 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:13:10.507245  470460 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:13:10.510171  470460 out.go:252]   - Booting up control plane ...
	I1108 10:13:10.510314  470460 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:13:10.510420  470460 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:13:10.510524  470460 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:13:10.510660  470460 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:13:10.510762  470460 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:13:10.510808  470460 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:13:10.510999  470460 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 10:13:10.511092  470460 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.503917 seconds
	I1108 10:13:10.511217  470460 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:13:10.511361  470460 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:13:10.511431  470460 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:13:10.511649  470460 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-332573 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:13:10.511714  470460 kubeadm.go:319] [bootstrap-token] Using token: j358fg.cj2grj2kxb3v08pc
	I1108 10:13:10.514592  470460 out.go:252]   - Configuring RBAC rules ...
	I1108 10:13:10.514729  470460 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:13:10.514829  470460 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:13:10.515001  470460 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:13:10.515148  470460 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:13:10.515288  470460 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:13:10.515402  470460 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:13:10.515535  470460 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:13:10.515587  470460 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:13:10.515642  470460 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:13:10.515651  470460 kubeadm.go:319] 
	I1108 10:13:10.515719  470460 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:13:10.515727  470460 kubeadm.go:319] 
	I1108 10:13:10.515814  470460 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:13:10.515822  470460 kubeadm.go:319] 
	I1108 10:13:10.515851  470460 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:13:10.515921  470460 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:13:10.515979  470460 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:13:10.515987  470460 kubeadm.go:319] 
	I1108 10:13:10.516049  470460 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:13:10.516057  470460 kubeadm.go:319] 
	I1108 10:13:10.516111  470460 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:13:10.516119  470460 kubeadm.go:319] 
	I1108 10:13:10.516178  470460 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:13:10.516266  470460 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:13:10.516346  470460 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:13:10.516356  470460 kubeadm.go:319] 
	I1108 10:13:10.516451  470460 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:13:10.516541  470460 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:13:10.516588  470460 kubeadm.go:319] 
	I1108 10:13:10.516709  470460 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j358fg.cj2grj2kxb3v08pc \
	I1108 10:13:10.517001  470460 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 10:13:10.517038  470460 kubeadm.go:319] 	--control-plane 
	I1108 10:13:10.517049  470460 kubeadm.go:319] 
	I1108 10:13:10.517165  470460 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:13:10.517191  470460 kubeadm.go:319] 
	I1108 10:13:10.517318  470460 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j358fg.cj2grj2kxb3v08pc \
	I1108 10:13:10.517464  470460 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
	I1108 10:13:10.517509  470460 cni.go:84] Creating CNI manager for ""
	I1108 10:13:10.517538  470460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:13:10.522580  470460 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:13:10.525606  470460 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:13:10.533699  470460 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1108 10:13:10.533716  470460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:13:10.550007  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:13:11.688076  470460 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.138028598s)
	I1108 10:13:11.688182  470460 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:13:11.688343  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:11.688513  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-332573 minikube.k8s.io/updated_at=2025_11_08T10_13_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=old-k8s-version-332573 minikube.k8s.io/primary=true
	I1108 10:13:12.022421  470460 ops.go:34] apiserver oom_adj: -16
	I1108 10:13:12.022549  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:12.522641  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:13.023536  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:13.523117  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:14.023402  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:14.523629  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:15.023647  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:15.523078  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:16.023422  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:16.523150  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:17.023261  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:17.523080  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:18.023405  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:18.522573  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:19.022922  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:19.522631  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:20.023421  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:20.523447  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:21.022652  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:21.523590  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:22.022614  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:22.523293  470460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:13:22.620392  470460 kubeadm.go:1114] duration metric: took 10.932104101s to wait for elevateKubeSystemPrivileges
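
The burst of identical `kubectl get sa default` runs above is a simple half-second poll: kubeadm has finished, but the default ServiceAccount only exists once the controller-manager creates it, so the tool retries until the get succeeds (about 10.9s in this run). The same wait, sketched with a ticker and a deadline rather than minikube's internal helper:

// wait_default_sa.go: sketch of polling for the default ServiceAccount with
// kubectl, mirroring the 500ms retry loop in the log. Paths are this run's values.
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()

	for {
		cmd := exec.CommandContext(ctx, "kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			log.Println("default ServiceAccount exists")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for the default ServiceAccount")
		case <-tick.C:
		}
	}
}
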
	I1108 10:13:22.620418  470460 kubeadm.go:403] duration metric: took 31.802375587s to StartCluster
	I1108 10:13:22.620434  470460 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:13:22.620499  470460 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:13:22.621536  470460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:13:22.621759  470460 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:13:22.621918  470460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:13:22.622191  470460 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:13:22.622228  470460 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:13:22.622291  470460 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-332573"
	I1108 10:13:22.622305  470460 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-332573"
	I1108 10:13:22.622328  470460 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:13:22.622832  470460 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:13:22.623288  470460 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-332573"
	I1108 10:13:22.623307  470460 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-332573"
	I1108 10:13:22.623572  470460 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:13:22.625411  470460 out.go:179] * Verifying Kubernetes components...
	I1108 10:13:22.628304  470460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:13:22.665777  470460 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:13:22.668684  470460 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:13:22.668706  470460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:13:22.668786  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:13:22.687102  470460 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-332573"
	I1108 10:13:22.687144  470460 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:13:22.687614  470460 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:13:22.710686  470460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:13:22.731287  470460 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:13:22.731310  470460 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:13:22.731399  470460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:13:22.762176  470460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:13:22.942374  470460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:13:22.988503  470460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:13:23.006182  470460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:13:23.034848  470460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:13:23.582759  470460 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1108 10:13:23.584660  470460 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-332573" to be "Ready" ...
	I1108 10:13:24.095709  470460 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-332573" context rescaled to 1 replicas
	I1108 10:13:24.208544  470460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.202266624s)
	I1108 10:13:24.208683  470460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.173760669s)
	I1108 10:13:24.218524  470460 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 10:13:24.221401  470460 addons.go:515] duration metric: took 1.599144499s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1108 10:13:25.588617  470460 node_ready.go:57] node "old-k8s-version-332573" has "Ready":"False" status (will retry)
	W1108 10:13:28.088729  470460 node_ready.go:57] node "old-k8s-version-332573" has "Ready":"False" status (will retry)
	W1108 10:13:30.590269  470460 node_ready.go:57] node "old-k8s-version-332573" has "Ready":"False" status (will retry)
	W1108 10:13:33.087689  470460 node_ready.go:57] node "old-k8s-version-332573" has "Ready":"False" status (will retry)
	W1108 10:13:35.087852  470460 node_ready.go:57] node "old-k8s-version-332573" has "Ready":"False" status (will retry)
	I1108 10:13:37.098647  470460 node_ready.go:49] node "old-k8s-version-332573" is "Ready"
	I1108 10:13:37.098672  470460 node_ready.go:38] duration metric: took 13.513776176s for node "old-k8s-version-332573" to be "Ready" ...
	I1108 10:13:37.098685  470460 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:13:37.098753  470460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:13:37.128280  470460 api_server.go:72] duration metric: took 14.506492055s to wait for apiserver process to appear ...
	I1108 10:13:37.128339  470460 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:13:37.128358  470460 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:13:37.143315  470460 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:13:37.145015  470460 api_server.go:141] control plane version: v1.28.0
	I1108 10:13:37.145039  470460 api_server.go:131] duration metric: took 16.692655ms to wait for apiserver health ...
	I1108 10:13:37.145049  470460 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:13:37.149670  470460 system_pods.go:59] 8 kube-system pods found
	I1108 10:13:37.149707  470460 system_pods.go:61] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:13:37.149715  470460 system_pods.go:61] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running
	I1108 10:13:37.149721  470460 system_pods.go:61] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:13:37.149725  470460 system_pods.go:61] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running
	I1108 10:13:37.149730  470460 system_pods.go:61] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running
	I1108 10:13:37.149742  470460 system_pods.go:61] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:13:37.149746  470460 system_pods.go:61] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running
	I1108 10:13:37.149752  470460 system_pods.go:61] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:13:37.149758  470460 system_pods.go:74] duration metric: took 4.704028ms to wait for pod list to return data ...
	I1108 10:13:37.149767  470460 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:13:37.160675  470460 default_sa.go:45] found service account: "default"
	I1108 10:13:37.160746  470460 default_sa.go:55] duration metric: took 10.973342ms for default service account to be created ...
	I1108 10:13:37.160770  470460 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:13:37.165246  470460 system_pods.go:86] 8 kube-system pods found
	I1108 10:13:37.165331  470460 system_pods.go:89] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:13:37.165361  470460 system_pods.go:89] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running
	I1108 10:13:37.165399  470460 system_pods.go:89] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:13:37.165422  470460 system_pods.go:89] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running
	I1108 10:13:37.165441  470460 system_pods.go:89] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running
	I1108 10:13:37.165461  470460 system_pods.go:89] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:13:37.165504  470460 system_pods.go:89] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running
	I1108 10:13:37.165526  470460 system_pods.go:89] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:13:37.165587  470460 retry.go:31] will retry after 289.69813ms: missing components: kube-dns
	I1108 10:13:37.460020  470460 system_pods.go:86] 8 kube-system pods found
	I1108 10:13:37.460057  470460 system_pods.go:89] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:13:37.460064  470460 system_pods.go:89] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running
	I1108 10:13:37.460071  470460 system_pods.go:89] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:13:37.460098  470460 system_pods.go:89] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running
	I1108 10:13:37.460104  470460 system_pods.go:89] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running
	I1108 10:13:37.460113  470460 system_pods.go:89] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:13:37.460117  470460 system_pods.go:89] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running
	I1108 10:13:37.460124  470460 system_pods.go:89] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:13:37.460146  470460 retry.go:31] will retry after 361.533787ms: missing components: kube-dns
	I1108 10:13:37.844029  470460 system_pods.go:86] 8 kube-system pods found
	I1108 10:13:37.844068  470460 system_pods.go:89] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:13:37.844075  470460 system_pods.go:89] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running
	I1108 10:13:37.844081  470460 system_pods.go:89] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:13:37.844085  470460 system_pods.go:89] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running
	I1108 10:13:37.844090  470460 system_pods.go:89] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running
	I1108 10:13:37.844094  470460 system_pods.go:89] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:13:37.844097  470460 system_pods.go:89] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running
	I1108 10:13:37.844104  470460 system_pods.go:89] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:13:37.844123  470460 retry.go:31] will retry after 330.204928ms: missing components: kube-dns
	I1108 10:13:38.178473  470460 system_pods.go:86] 8 kube-system pods found
	I1108 10:13:38.178545  470460 system_pods.go:89] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Running
	I1108 10:13:38.178568  470460 system_pods.go:89] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running
	I1108 10:13:38.178581  470460 system_pods.go:89] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:13:38.178586  470460 system_pods.go:89] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running
	I1108 10:13:38.178591  470460 system_pods.go:89] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running
	I1108 10:13:38.178595  470460 system_pods.go:89] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:13:38.178599  470460 system_pods.go:89] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running
	I1108 10:13:38.178615  470460 system_pods.go:89] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Running
	I1108 10:13:38.178628  470460 system_pods.go:126] duration metric: took 1.017840154s to wait for k8s-apps to be running ...
	I1108 10:13:38.178636  470460 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:13:38.178690  470460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:13:38.196543  470460 system_svc.go:56] duration metric: took 17.89761ms WaitForService to wait for kubelet
	I1108 10:13:38.196570  470460 kubeadm.go:587] duration metric: took 15.574786633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:13:38.196589  470460 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:13:38.199311  470460 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:13:38.199344  470460 node_conditions.go:123] node cpu capacity is 2
	I1108 10:13:38.199358  470460 node_conditions.go:105] duration metric: took 2.762946ms to run NodePressure ...
	I1108 10:13:38.199370  470460 start.go:242] waiting for startup goroutines ...
	I1108 10:13:38.199378  470460 start.go:247] waiting for cluster config update ...
	I1108 10:13:38.199388  470460 start.go:256] writing updated cluster config ...
	I1108 10:13:38.199663  470460 ssh_runner.go:195] Run: rm -f paused
	I1108 10:13:38.204062  470460 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:13:38.209652  470460 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-4s446" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:38.214874  470460 pod_ready.go:94] pod "coredns-5dd5756b68-4s446" is "Ready"
	I1108 10:13:38.214901  470460 pod_ready.go:86] duration metric: took 5.218666ms for pod "coredns-5dd5756b68-4s446" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:38.217963  470460 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:38.223648  470460 pod_ready.go:94] pod "etcd-old-k8s-version-332573" is "Ready"
	I1108 10:13:38.223684  470460 pod_ready.go:86] duration metric: took 5.649585ms for pod "etcd-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:38.229233  470460 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:38.234505  470460 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-332573" is "Ready"
	I1108 10:13:38.234537  470460 pod_ready.go:86] duration metric: took 5.27428ms for pod "kube-apiserver-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:38.238031  470460 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:38.608412  470460 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-332573" is "Ready"
	I1108 10:13:38.608439  470460 pod_ready.go:86] duration metric: took 370.382566ms for pod "kube-controller-manager-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:38.809248  470460 pod_ready.go:83] waiting for pod "kube-proxy-bn8tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:39.208228  470460 pod_ready.go:94] pod "kube-proxy-bn8tb" is "Ready"
	I1108 10:13:39.208258  470460 pod_ready.go:86] duration metric: took 398.986442ms for pod "kube-proxy-bn8tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:39.409173  470460 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:39.808959  470460 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-332573" is "Ready"
	I1108 10:13:39.808988  470460 pod_ready.go:86] duration metric: took 399.789723ms for pod "kube-scheduler-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:13:39.809001  470460 pod_ready.go:40] duration metric: took 1.604901226s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:13:39.882651  470460 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1108 10:13:39.885824  470460 out.go:203] 
	W1108 10:13:39.888814  470460 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 10:13:39.891699  470460 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 10:13:39.895457  470460 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-332573" cluster and "default" namespace by default
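	
	The kubectl version-skew warning above can be avoided by using the kubectl that minikube manages for this profile, as the printed hint suggests. A minimal sketch, assuming the profile name used in this run:
	
	    # let minikube fetch and invoke a kubectl matching the cluster's v1.28.0
	    minikube -p old-k8s-version-332573 kubectl -- get pods -A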
	
	
	==> CRI-O <==
	Nov 08 10:13:37 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:37.088416784Z" level=info msg="Created container dc58b5505bf47633bcfbdada9f402c45a37813cbe5773ee3def5e3de1d7102e0: kube-system/coredns-5dd5756b68-4s446/coredns" id=78dbfd7b-08cc-4944-81c7-f353a5a0ef23 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:13:37 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:37.089384281Z" level=info msg="Starting container: dc58b5505bf47633bcfbdada9f402c45a37813cbe5773ee3def5e3de1d7102e0" id=b2371395-9d36-450c-867b-951d6be98531 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:13:37 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:37.092091547Z" level=info msg="Started container" PID=1951 containerID=dc58b5505bf47633bcfbdada9f402c45a37813cbe5773ee3def5e3de1d7102e0 description=kube-system/coredns-5dd5756b68-4s446/coredns id=b2371395-9d36-450c-867b-951d6be98531 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3715f3e1e099fb8e82863d898579e966f36d27f68142fdb25e9868a72cccd409
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.428835351Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e09e7cf6-ac29-4d6c-96a6-835dfd8b0b12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.4289523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.43945094Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:055e4606a990ed40d9243442b57fe39b3c91f7af06b18cf21c48395d913b687b UID:8ced1743-f6f3-4055-9ed3-c5f2125a022a NetNS:/var/run/netns/182c8180-452d-43b9-a30d-276c1aafc818 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791f0}] Aliases:map[]}"
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.439665497Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.450944187Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:055e4606a990ed40d9243442b57fe39b3c91f7af06b18cf21c48395d913b687b UID:8ced1743-f6f3-4055-9ed3-c5f2125a022a NetNS:/var/run/netns/182c8180-452d-43b9-a30d-276c1aafc818 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791f0}] Aliases:map[]}"
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.451136139Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.457555107Z" level=info msg="Ran pod sandbox 055e4606a990ed40d9243442b57fe39b3c91f7af06b18cf21c48395d913b687b with infra container: default/busybox/POD" id=e09e7cf6-ac29-4d6c-96a6-835dfd8b0b12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.458803213Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=93f2a3fe-6cfc-4aee-85d2-71256ede10ad name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.458942036Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=93f2a3fe-6cfc-4aee-85d2-71256ede10ad name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.458987936Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=93f2a3fe-6cfc-4aee-85d2-71256ede10ad name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.459531094Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=10ae6b18-b415-46ae-a151-4edd605f39af name=/runtime.v1.ImageService/PullImage
	Nov 08 10:13:40 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:40.462333778Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.599098088Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=10ae6b18-b415-46ae-a151-4edd605f39af name=/runtime.v1.ImageService/PullImage
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.602417567Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eefef68f-15ed-4f0c-9908-a94858d922c3 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.60416682Z" level=info msg="Creating container: default/busybox/busybox" id=6ce8f4d1-829b-449a-9d38-369a4cbe2608 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.604409218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.609506176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.609983168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.63068164Z" level=info msg="Created container 6c29d4184c8e02caa170a0fd6506d939f946c6529f59e0e848643adb170ea620: default/busybox/busybox" id=6ce8f4d1-829b-449a-9d38-369a4cbe2608 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.634325363Z" level=info msg="Starting container: 6c29d4184c8e02caa170a0fd6506d939f946c6529f59e0e848643adb170ea620" id=2bfd7a70-093d-4988-ad7a-0f66bb87b4e2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:13:42 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:42.638674148Z" level=info msg="Started container" PID=2007 containerID=6c29d4184c8e02caa170a0fd6506d939f946c6529f59e0e848643adb170ea620 description=default/busybox/busybox id=2bfd7a70-093d-4988-ad7a-0f66bb87b4e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=055e4606a990ed40d9243442b57fe39b3c91f7af06b18cf21c48395d913b687b
	Nov 08 10:13:49 old-k8s-version-332573 crio[839]: time="2025-11-08T10:13:49.344221319Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	6c29d4184c8e0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   055e4606a990e       busybox                                          default
	dc58b5505bf47       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   3715f3e1e099f       coredns-5dd5756b68-4s446                         kube-system
	11e333d77299f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   03145250c451d       storage-provisioner                              kube-system
	91dc34ce65127       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   ad852649113f8       kindnet-qg5t6                                    kube-system
	c6eb58cfbd178       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   a59f893c3dff3       kube-proxy-bn8tb                                 kube-system
	49b326858499b       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   2f79e28bdde4d       kube-scheduler-old-k8s-version-332573            kube-system
	0ad206d2e8612       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   681ebf1740c2a       etcd-old-k8s-version-332573                      kube-system
	92460aa8e37fd       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   84d86e4e17c74       kube-controller-manager-old-k8s-version-332573   kube-system
	2d6ab35e8761a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   4ce6882a2197a       kube-apiserver-old-k8s-version-332573            kube-system
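	
	The container listing above reflects what CRI-O reports on the node. A sketch of how to query it directly, assuming the same profile name and that crictl is present in the node image:
	
	    # run crictl against the node's CRI socket over minikube ssh
	    minikube -p old-k8s-version-332573 ssh "sudo crictl ps -a"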
	
	
	==> coredns [dc58b5505bf47633bcfbdada9f402c45a37813cbe5773ee3def5e3de1d7102e0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39331 - 45830 "HINFO IN 6146084360963222482.5111682196839564269. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.052669609s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-332573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-332573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=old-k8s-version-332573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_13_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:13:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-332573
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:13:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:13:41 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:13:41 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:13:41 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:13:41 +0000   Sat, 08 Nov 2025 10:13:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-332573
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d2774f32-76bc-4924-aa00-9e91907fb5f7
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-4s446                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-332573                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         44s
	  kube-system                 kindnet-qg5t6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-332573             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-332573    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-bn8tb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-332573             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-332573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-332573 event: Registered Node old-k8s-version-332573 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-332573 status is now: NodeReady
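	
	The node description above can be regenerated against the running cluster; a sketch, assuming the kubeconfig context this run configured:
	
	    # same output source as the section above, queried live
	    kubectl --context old-k8s-version-332573 describe node old-k8s-version-332573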
	
	
	==> dmesg <==
	[Nov 8 09:44] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:45] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:50] overlayfs: idmapped layers are currently not supported
	[ +37.319908] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:51] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ad206d2e8612cce26997d8e35ad69c3207c76b8c433b09efac2aff7c28f1992] <==
	{"level":"info","ts":"2025-11-08T10:13:02.927327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-08T10:13:02.930824Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-08T10:13:02.929984Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T10:13:02.931128Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T10:13:02.930043Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:13:02.933031Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:13:02.933293Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T10:13:03.683136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-08T10:13:03.683279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-08T10:13:03.68335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-08T10:13:03.683448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-08T10:13:03.683518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-08T10:13:03.683554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-08T10:13:03.683597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-08T10:13:03.689109Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-332573 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T10:13:03.689199Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:13:03.689256Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:13:03.689506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:13:03.690456Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-08T10:13:03.693793Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T10:13:03.693826Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T10:13:03.69389Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:13:03.69396Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:13:03.694286Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-08T10:13:03.694331Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 10:13:51 up  2:56,  0 user,  load average: 2.79, 2.88, 2.30
	Linux old-k8s-version-332573 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91dc34ce651274ccf8ca72def82bc31aa730af02b7379f59673bc6b4d998ca95] <==
	I1108 10:13:26.221798       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:13:26.222028       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:13:26.222154       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:13:26.222165       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:13:26.222174       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:13:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:13:26.518328       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:13:26.518391       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:13:26.518780       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:13:26.539988       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 10:13:26.619941       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:13:26.619967       1 metrics.go:72] Registering metrics
	I1108 10:13:26.620021       1 controller.go:711] "Syncing nftables rules"
	I1108 10:13:36.520997       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:13:36.521060       1 main.go:301] handling current node
	I1108 10:13:46.519932       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:13:46.519970       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2d6ab35e8761add8119d5bf103758b0d029fb329e6cf94ccc997cdb03f10f896] <==
	I1108 10:13:07.057208       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 10:13:07.057852       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 10:13:07.057940       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 10:13:07.059355       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:13:07.077894       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 10:13:07.078006       1 aggregator.go:166] initial CRD sync complete...
	I1108 10:13:07.078037       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 10:13:07.078064       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:13:07.078091       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:13:07.087160       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:13:07.765523       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:13:07.772522       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:13:07.772543       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:13:08.524496       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:13:08.609047       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:13:08.679556       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:13:08.688666       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1108 10:13:08.689907       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 10:13:08.694991       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:13:08.870928       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 10:13:10.377853       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 10:13:10.392506       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:13:10.406421       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1108 10:13:22.179125       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 10:13:22.332833       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [92460aa8e37fd3e26a18a30a9d8b818abc488190b443f2f98d5e8804c3dfd579] <==
	I1108 10:13:21.870080       1 shared_informer.go:318] Caches are synced for disruption
	I1108 10:13:21.870083       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 10:13:21.901394       1 shared_informer.go:318] Caches are synced for persistent volume
	I1108 10:13:22.185069       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1108 10:13:22.301372       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:13:22.315534       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:13:22.315583       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 10:13:22.344631       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bn8tb"
	I1108 10:13:22.351513       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qg5t6"
	I1108 10:13:22.784978       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gn9l8"
	I1108 10:13:22.838340       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-4s446"
	I1108 10:13:22.869412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="684.880714ms"
	I1108 10:13:22.902375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.911494ms"
	I1108 10:13:22.902477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.47µs"
	I1108 10:13:23.668488       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1108 10:13:23.754065       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-gn9l8"
	I1108 10:13:23.778725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.029144ms"
	I1108 10:13:23.809119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="30.328751ms"
	I1108 10:13:23.818987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.028µs"
	I1108 10:13:36.670688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.137µs"
	I1108 10:13:36.710329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.597µs"
	I1108 10:13:36.718288       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1108 10:13:37.848103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="174.344µs"
	I1108 10:13:37.879978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.820895ms"
	I1108 10:13:37.880152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.162µs"
	
	
	==> kube-proxy [c6eb58cfbd17837953f73bc33217a3b4114194994309dca78849afa95c142b00] <==
	I1108 10:13:23.446473       1 server_others.go:69] "Using iptables proxy"
	I1108 10:13:23.461576       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1108 10:13:23.513627       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:13:23.522093       1 server_others.go:152] "Using iptables Proxier"
	I1108 10:13:23.522145       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 10:13:23.522153       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 10:13:23.522181       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 10:13:23.522379       1 server.go:846] "Version info" version="v1.28.0"
	I1108 10:13:23.522389       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:13:23.523608       1 config.go:188] "Starting service config controller"
	I1108 10:13:23.523619       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 10:13:23.523637       1 config.go:97] "Starting endpoint slice config controller"
	I1108 10:13:23.523641       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 10:13:23.524001       1 config.go:315] "Starting node config controller"
	I1108 10:13:23.524009       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 10:13:23.624744       1 shared_informer.go:318] Caches are synced for node config
	I1108 10:13:23.624785       1 shared_informer.go:318] Caches are synced for service config
	I1108 10:13:23.624813       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [49b326858499b34555fa7c818184959173dee01d955761dcdb2a7d5c5b0d170e] <==
	W1108 10:13:07.017412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 10:13:07.017445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 10:13:07.017454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 10:13:07.017520       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 10:13:07.017570       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 10:13:07.017587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 10:13:07.017601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 10:13:07.017704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 10:13:07.017660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 10:13:07.017717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 10:13:07.018604       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 10:13:07.018629       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:13:07.828346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 10:13:07.828481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 10:13:07.894643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 10:13:07.894699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 10:13:07.973220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 10:13:07.973361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 10:13:08.118421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 10:13:08.118517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1108 10:13:08.184756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 10:13:08.184797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 10:13:08.202221       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 10:13:08.202345       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1108 10:13:11.193045       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 10:13:22 old-k8s-version-332573 kubelet[1397]: I1108 10:13:22.384171    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2634489a-0805-4e5b-9e11-39bd98299cf9-lib-modules\") pod \"kindnet-qg5t6\" (UID: \"2634489a-0805-4e5b-9e11-39bd98299cf9\") " pod="kube-system/kindnet-qg5t6"
	Nov 08 10:13:22 old-k8s-version-332573 kubelet[1397]: I1108 10:13:22.384282    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9983ee1d-1280-460a-8b5e-183f0cd5fc26-xtables-lock\") pod \"kube-proxy-bn8tb\" (UID: \"9983ee1d-1280-460a-8b5e-183f0cd5fc26\") " pod="kube-system/kube-proxy-bn8tb"
	Nov 08 10:13:22 old-k8s-version-332573 kubelet[1397]: E1108 10:13:22.496456    1397 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 08 10:13:22 old-k8s-version-332573 kubelet[1397]: E1108 10:13:22.496664    1397 projected.go:198] Error preparing data for projected volume kube-api-access-twvwf for pod kube-system/kindnet-qg5t6: configmap "kube-root-ca.crt" not found
	Nov 08 10:13:22 old-k8s-version-332573 kubelet[1397]: E1108 10:13:22.496948    1397 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 08 10:13:22 old-k8s-version-332573 kubelet[1397]: E1108 10:13:22.496971    1397 projected.go:198] Error preparing data for projected volume kube-api-access-vjkrn for pod kube-system/kube-proxy-bn8tb: configmap "kube-root-ca.crt" not found
	Nov 08 10:13:22 old-k8s-version-332573 kubelet[1397]: E1108 10:13:22.497011    1397 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2634489a-0805-4e5b-9e11-39bd98299cf9-kube-api-access-twvwf podName:2634489a-0805-4e5b-9e11-39bd98299cf9 nodeName:}" failed. No retries permitted until 2025-11-08 10:13:22.996785585 +0000 UTC m=+12.655594929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-twvwf" (UniqueName: "kubernetes.io/projected/2634489a-0805-4e5b-9e11-39bd98299cf9-kube-api-access-twvwf") pod "kindnet-qg5t6" (UID: "2634489a-0805-4e5b-9e11-39bd98299cf9") : configmap "kube-root-ca.crt" not found
	Nov 08 10:13:22 old-k8s-version-332573 kubelet[1397]: E1108 10:13:22.497031    1397 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9983ee1d-1280-460a-8b5e-183f0cd5fc26-kube-api-access-vjkrn podName:9983ee1d-1280-460a-8b5e-183f0cd5fc26 nodeName:}" failed. No retries permitted until 2025-11-08 10:13:22.997019851 +0000 UTC m=+12.655829195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vjkrn" (UniqueName: "kubernetes.io/projected/9983ee1d-1280-460a-8b5e-183f0cd5fc26-kube-api-access-vjkrn") pod "kube-proxy-bn8tb" (UID: "9983ee1d-1280-460a-8b5e-183f0cd5fc26") : configmap "kube-root-ca.crt" not found
	Nov 08 10:13:23 old-k8s-version-332573 kubelet[1397]: W1108 10:13:23.269380    1397 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/crio-a59f893c3dff3f706d9bd0c533c8aad84d32561658e58b80888924260e4eaa35 WatchSource:0}: Error finding container a59f893c3dff3f706d9bd0c533c8aad84d32561658e58b80888924260e4eaa35: Status 404 returned error can't find the container with id a59f893c3dff3f706d9bd0c533c8aad84d32561658e58b80888924260e4eaa35
	Nov 08 10:13:23 old-k8s-version-332573 kubelet[1397]: W1108 10:13:23.297994    1397 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/crio-ad852649113f897b57812d3a589baa5fa8e89f985b2fa16ce114a67deef010b7 WatchSource:0}: Error finding container ad852649113f897b57812d3a589baa5fa8e89f985b2fa16ce114a67deef010b7: Status 404 returned error can't find the container with id ad852649113f897b57812d3a589baa5fa8e89f985b2fa16ce114a67deef010b7
	Nov 08 10:13:23 old-k8s-version-332573 kubelet[1397]: I1108 10:13:23.819505    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bn8tb" podStartSLOduration=1.819450963 podCreationTimestamp="2025-11-08 10:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:13:23.819097082 +0000 UTC m=+13.477906434" watchObservedRunningTime="2025-11-08 10:13:23.819450963 +0000 UTC m=+13.478260315"
	Nov 08 10:13:30 old-k8s-version-332573 kubelet[1397]: I1108 10:13:30.585221    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-qg5t6" podStartSLOduration=5.747004051 podCreationTimestamp="2025-11-08 10:13:22 +0000 UTC" firstStartedPulling="2025-11-08 10:13:23.306603373 +0000 UTC m=+12.965412717" lastFinishedPulling="2025-11-08 10:13:26.144776332 +0000 UTC m=+15.803585676" observedRunningTime="2025-11-08 10:13:26.805493488 +0000 UTC m=+16.464302848" watchObservedRunningTime="2025-11-08 10:13:30.58517701 +0000 UTC m=+20.243986370"
	Nov 08 10:13:36 old-k8s-version-332573 kubelet[1397]: I1108 10:13:36.619514    1397 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 08 10:13:36 old-k8s-version-332573 kubelet[1397]: I1108 10:13:36.668458    1397 topology_manager.go:215] "Topology Admit Handler" podUID="c1b3815e-fae2-49ce-acba-3dcfc39bf058" podNamespace="kube-system" podName="coredns-5dd5756b68-4s446"
	Nov 08 10:13:36 old-k8s-version-332573 kubelet[1397]: I1108 10:13:36.673233    1397 topology_manager.go:215] "Topology Admit Handler" podUID="3942a7b8-f620-491e-8fdf-5ff17477030f" podNamespace="kube-system" podName="storage-provisioner"
	Nov 08 10:13:36 old-k8s-version-332573 kubelet[1397]: I1108 10:13:36.793698    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1b3815e-fae2-49ce-acba-3dcfc39bf058-config-volume\") pod \"coredns-5dd5756b68-4s446\" (UID: \"c1b3815e-fae2-49ce-acba-3dcfc39bf058\") " pod="kube-system/coredns-5dd5756b68-4s446"
	Nov 08 10:13:36 old-k8s-version-332573 kubelet[1397]: I1108 10:13:36.793755    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsflq\" (UniqueName: \"kubernetes.io/projected/c1b3815e-fae2-49ce-acba-3dcfc39bf058-kube-api-access-jsflq\") pod \"coredns-5dd5756b68-4s446\" (UID: \"c1b3815e-fae2-49ce-acba-3dcfc39bf058\") " pod="kube-system/coredns-5dd5756b68-4s446"
	Nov 08 10:13:36 old-k8s-version-332573 kubelet[1397]: I1108 10:13:36.793787    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlfxl\" (UniqueName: \"kubernetes.io/projected/3942a7b8-f620-491e-8fdf-5ff17477030f-kube-api-access-nlfxl\") pod \"storage-provisioner\" (UID: \"3942a7b8-f620-491e-8fdf-5ff17477030f\") " pod="kube-system/storage-provisioner"
	Nov 08 10:13:36 old-k8s-version-332573 kubelet[1397]: I1108 10:13:36.793819    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3942a7b8-f620-491e-8fdf-5ff17477030f-tmp\") pod \"storage-provisioner\" (UID: \"3942a7b8-f620-491e-8fdf-5ff17477030f\") " pod="kube-system/storage-provisioner"
	Nov 08 10:13:36 old-k8s-version-332573 kubelet[1397]: W1108 10:13:36.987794    1397 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/crio-03145250c451d1db4bca46a9a5f3e4ef3fde27862820af03be3e0db44f513a48 WatchSource:0}: Error finding container 03145250c451d1db4bca46a9a5f3e4ef3fde27862820af03be3e0db44f513a48: Status 404 returned error can't find the container with id 03145250c451d1db4bca46a9a5f3e4ef3fde27862820af03be3e0db44f513a48
	Nov 08 10:13:37 old-k8s-version-332573 kubelet[1397]: I1108 10:13:37.863128    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4s446" podStartSLOduration=15.863085528 podCreationTimestamp="2025-11-08 10:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:13:37.84746913 +0000 UTC m=+27.506278474" watchObservedRunningTime="2025-11-08 10:13:37.863085528 +0000 UTC m=+27.521894872"
	Nov 08 10:13:40 old-k8s-version-332573 kubelet[1397]: I1108 10:13:40.126203    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.126141243 podCreationTimestamp="2025-11-08 10:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:13:37.887273891 +0000 UTC m=+27.546083235" watchObservedRunningTime="2025-11-08 10:13:40.126141243 +0000 UTC m=+29.784950595"
	Nov 08 10:13:40 old-k8s-version-332573 kubelet[1397]: I1108 10:13:40.126373    1397 topology_manager.go:215] "Topology Admit Handler" podUID="8ced1743-f6f3-4055-9ed3-c5f2125a022a" podNamespace="default" podName="busybox"
	Nov 08 10:13:40 old-k8s-version-332573 kubelet[1397]: I1108 10:13:40.225998    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjtkz\" (UniqueName: \"kubernetes.io/projected/8ced1743-f6f3-4055-9ed3-c5f2125a022a-kube-api-access-rjtkz\") pod \"busybox\" (UID: \"8ced1743-f6f3-4055-9ed3-c5f2125a022a\") " pod="default/busybox"
	Nov 08 10:13:40 old-k8s-version-332573 kubelet[1397]: W1108 10:13:40.455306    1397 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/crio-055e4606a990ed40d9243442b57fe39b3c91f7af06b18cf21c48395d913b687b WatchSource:0}: Error finding container 055e4606a990ed40d9243442b57fe39b3c91f7af06b18cf21c48395d913b687b: Status 404 returned error can't find the container with id 055e4606a990ed40d9243442b57fe39b3c91f7af06b18cf21c48395d913b687b
	
	
	==> storage-provisioner [11e333d77299f2e9414bd7853c368c371f16b204c0dd0db64a0b2bb56b5b00ba] <==
	I1108 10:13:37.048746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:13:37.066369       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:13:37.066415       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 10:13:37.083916       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:13:37.084213       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332573_fe955f4b-4d93-4b7a-abb6-c235a1c7a148!
	I1108 10:13:37.085252       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e016727-b435-4896-8e63-48348502e137", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-332573_fe955f4b-4d93-4b7a-abb6-c235a1c7a148 became leader
	I1108 10:13:37.184393       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332573_fe955f4b-4d93-4b7a-abb6-c235a1c7a148!
	

                                                
                                                
-- /stdout --
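Note on the storage-provisioner lines above: the leaderelection.go messages show the controller acquiring the kube-system/k8s.io-minikube-hostpath lock before it starts provisioning. Purely as a hedged sketch of that pattern (using client-go's Lease-based lock rather than the Endpoints-based lock the event above reports, and with hypothetical callback bodies), a standalone controller would wire it up roughly like this:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
        "k8s.io/klog/v2"
    )

    func main() {
        // Kubeconfig path is an assumption; the provisioner itself runs in-cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            klog.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        id, _ := os.Hostname()
        lock := &resourcelock.LeaseLock{
            LeaseMeta: metav1.ObjectMeta{
                Name:      "k8s.io-minikube-hostpath", // same lock name as in the log
                Namespace: "kube-system",
            },
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    klog.Info("acquired lease, starting provisioner controller")
                    <-ctx.Done() // real code would run the controller loop here
                },
                OnStoppedLeading: func() {
                    klog.Info("lost lease, shutting down")
                },
            },
        })
    }

The design point visible in the captured log is the same: the "Starting provisioner controller" message only appears after the lease has been acquired.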
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-332573 -n old-k8s-version-332573
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-332573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.54s)
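The post-mortem above checks for unhealthy pods with a kubectl field selector (status.phase!=Running across all namespaces). For reference only, the equivalent query through client-go is sketched below; it is an illustration of the same filter, with the kubeconfig path as an assumption rather than anything the test harness actually does:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; the test uses --context old-k8s-version-332573.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Same filter as the kubectl call above: every pod, in every namespace,
        // whose phase is anything other than Running.
        pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }

An empty result from either form of the query is what the post-mortem treats as "no pods stuck outside Running".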

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-332573 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-332573 --alsologtostderr -v=1: exit status 80 (2.579066539s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-332573 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:15:09.991540  476277 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:15:09.991662  476277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:15:09.991672  476277 out.go:374] Setting ErrFile to fd 2...
	I1108 10:15:09.991677  476277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:15:09.991935  476277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:15:09.992292  476277 out.go:368] Setting JSON to false
	I1108 10:15:09.992322  476277 mustload.go:66] Loading cluster: old-k8s-version-332573
	I1108 10:15:09.992719  476277 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:15:09.993397  476277 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:15:10.024702  476277 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:15:10.025121  476277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:15:10.093561  476277 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:15:10.082530647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:15:10.094424  476277 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-332573 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:15:10.101021  476277 out.go:179] * Pausing node old-k8s-version-332573 ... 
	I1108 10:15:10.106206  476277 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:15:10.106611  476277 ssh_runner.go:195] Run: systemctl --version
	I1108 10:15:10.106662  476277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:15:10.125268  476277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:15:10.231769  476277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:15:10.256236  476277 pause.go:52] kubelet running: true
	I1108 10:15:10.256321  476277 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:15:10.500292  476277 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:15:10.500387  476277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:15:10.573773  476277 cri.go:89] found id: "e07a28f291b2fb58d4ce48d5496cd7dba9831b2944b34f9927c168afd4522bd7"
	I1108 10:15:10.573794  476277 cri.go:89] found id: "f35906ed98b83b5dffa8616e43242968b9b5736fdb970a04ad8e70d083d54e91"
	I1108 10:15:10.573799  476277 cri.go:89] found id: "1a824beedc294c0a61db23a182cf893538af18997f4c56b81e23ccb1987066e7"
	I1108 10:15:10.573803  476277 cri.go:89] found id: "3005567625f32fc0c3b56e4ac4331d3fa613587bca9f198558bc9da766621077"
	I1108 10:15:10.573806  476277 cri.go:89] found id: "6faa522a3460f1fe9a0b871ab93dca9008282501f9c393d4f78de19936b855b1"
	I1108 10:15:10.573810  476277 cri.go:89] found id: "da1e13436901f6a2118e84439fab747e99dc786d2425d761e4ad1fad19016839"
	I1108 10:15:10.573813  476277 cri.go:89] found id: "c4401d75cbcf9e18223b7ce1c2681a4104ec2ca285d171c2fdc61f9eeaa9d089"
	I1108 10:15:10.573816  476277 cri.go:89] found id: "ddf965d723cdc3f9815a4ca0f4c33a9935ba39f91c2f7f5f2b12cf47d8b81e89"
	I1108 10:15:10.573820  476277 cri.go:89] found id: "f1403b9fcd37ed7fa8ce4d09687e1e5c99a91bea4d445e900f4d34951698c916"
	I1108 10:15:10.573826  476277 cri.go:89] found id: "6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861"
	I1108 10:15:10.573830  476277 cri.go:89] found id: "f43848c6090cdad59f16b76bc22d2edf9cdbf2bc870489f430dd04346773cd3c"
	I1108 10:15:10.573833  476277 cri.go:89] found id: ""
	I1108 10:15:10.573911  476277 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:15:10.587939  476277 retry.go:31] will retry after 224.718154ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:15:10Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:15:10.813410  476277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:15:10.827364  476277 pause.go:52] kubelet running: false
	I1108 10:15:10.827428  476277 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:15:11.019192  476277 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:15:11.019339  476277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:15:11.099275  476277 cri.go:89] found id: "e07a28f291b2fb58d4ce48d5496cd7dba9831b2944b34f9927c168afd4522bd7"
	I1108 10:15:11.099303  476277 cri.go:89] found id: "f35906ed98b83b5dffa8616e43242968b9b5736fdb970a04ad8e70d083d54e91"
	I1108 10:15:11.099309  476277 cri.go:89] found id: "1a824beedc294c0a61db23a182cf893538af18997f4c56b81e23ccb1987066e7"
	I1108 10:15:11.099313  476277 cri.go:89] found id: "3005567625f32fc0c3b56e4ac4331d3fa613587bca9f198558bc9da766621077"
	I1108 10:15:11.099317  476277 cri.go:89] found id: "6faa522a3460f1fe9a0b871ab93dca9008282501f9c393d4f78de19936b855b1"
	I1108 10:15:11.099321  476277 cri.go:89] found id: "da1e13436901f6a2118e84439fab747e99dc786d2425d761e4ad1fad19016839"
	I1108 10:15:11.099324  476277 cri.go:89] found id: "c4401d75cbcf9e18223b7ce1c2681a4104ec2ca285d171c2fdc61f9eeaa9d089"
	I1108 10:15:11.099327  476277 cri.go:89] found id: "ddf965d723cdc3f9815a4ca0f4c33a9935ba39f91c2f7f5f2b12cf47d8b81e89"
	I1108 10:15:11.099330  476277 cri.go:89] found id: "f1403b9fcd37ed7fa8ce4d09687e1e5c99a91bea4d445e900f4d34951698c916"
	I1108 10:15:11.099337  476277 cri.go:89] found id: "6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861"
	I1108 10:15:11.099350  476277 cri.go:89] found id: "f43848c6090cdad59f16b76bc22d2edf9cdbf2bc870489f430dd04346773cd3c"
	I1108 10:15:11.099353  476277 cri.go:89] found id: ""
	I1108 10:15:11.099410  476277 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:15:11.111597  476277 retry.go:31] will retry after 478.91508ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:15:11Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:15:11.591325  476277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:15:11.604724  476277 pause.go:52] kubelet running: false
	I1108 10:15:11.604811  476277 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:15:11.769897  476277 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:15:11.770006  476277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:15:11.840797  476277 cri.go:89] found id: "e07a28f291b2fb58d4ce48d5496cd7dba9831b2944b34f9927c168afd4522bd7"
	I1108 10:15:11.840831  476277 cri.go:89] found id: "f35906ed98b83b5dffa8616e43242968b9b5736fdb970a04ad8e70d083d54e91"
	I1108 10:15:11.840836  476277 cri.go:89] found id: "1a824beedc294c0a61db23a182cf893538af18997f4c56b81e23ccb1987066e7"
	I1108 10:15:11.840840  476277 cri.go:89] found id: "3005567625f32fc0c3b56e4ac4331d3fa613587bca9f198558bc9da766621077"
	I1108 10:15:11.840843  476277 cri.go:89] found id: "6faa522a3460f1fe9a0b871ab93dca9008282501f9c393d4f78de19936b855b1"
	I1108 10:15:11.840847  476277 cri.go:89] found id: "da1e13436901f6a2118e84439fab747e99dc786d2425d761e4ad1fad19016839"
	I1108 10:15:11.840849  476277 cri.go:89] found id: "c4401d75cbcf9e18223b7ce1c2681a4104ec2ca285d171c2fdc61f9eeaa9d089"
	I1108 10:15:11.840852  476277 cri.go:89] found id: "ddf965d723cdc3f9815a4ca0f4c33a9935ba39f91c2f7f5f2b12cf47d8b81e89"
	I1108 10:15:11.840856  476277 cri.go:89] found id: "f1403b9fcd37ed7fa8ce4d09687e1e5c99a91bea4d445e900f4d34951698c916"
	I1108 10:15:11.840874  476277 cri.go:89] found id: "6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861"
	I1108 10:15:11.840891  476277 cri.go:89] found id: "f43848c6090cdad59f16b76bc22d2edf9cdbf2bc870489f430dd04346773cd3c"
	I1108 10:15:11.840895  476277 cri.go:89] found id: ""
	I1108 10:15:11.840981  476277 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:15:11.853636  476277 retry.go:31] will retry after 351.958239ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:15:11Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:15:12.206377  476277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:15:12.220168  476277 pause.go:52] kubelet running: false
	I1108 10:15:12.220273  476277 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:15:12.404090  476277 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:15:12.404236  476277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:15:12.482105  476277 cri.go:89] found id: "e07a28f291b2fb58d4ce48d5496cd7dba9831b2944b34f9927c168afd4522bd7"
	I1108 10:15:12.482173  476277 cri.go:89] found id: "f35906ed98b83b5dffa8616e43242968b9b5736fdb970a04ad8e70d083d54e91"
	I1108 10:15:12.482184  476277 cri.go:89] found id: "1a824beedc294c0a61db23a182cf893538af18997f4c56b81e23ccb1987066e7"
	I1108 10:15:12.482189  476277 cri.go:89] found id: "3005567625f32fc0c3b56e4ac4331d3fa613587bca9f198558bc9da766621077"
	I1108 10:15:12.482193  476277 cri.go:89] found id: "6faa522a3460f1fe9a0b871ab93dca9008282501f9c393d4f78de19936b855b1"
	I1108 10:15:12.482197  476277 cri.go:89] found id: "da1e13436901f6a2118e84439fab747e99dc786d2425d761e4ad1fad19016839"
	I1108 10:15:12.482200  476277 cri.go:89] found id: "c4401d75cbcf9e18223b7ce1c2681a4104ec2ca285d171c2fdc61f9eeaa9d089"
	I1108 10:15:12.482203  476277 cri.go:89] found id: "ddf965d723cdc3f9815a4ca0f4c33a9935ba39f91c2f7f5f2b12cf47d8b81e89"
	I1108 10:15:12.482206  476277 cri.go:89] found id: "f1403b9fcd37ed7fa8ce4d09687e1e5c99a91bea4d445e900f4d34951698c916"
	I1108 10:15:12.482223  476277 cri.go:89] found id: "6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861"
	I1108 10:15:12.482227  476277 cri.go:89] found id: "f43848c6090cdad59f16b76bc22d2edf9cdbf2bc870489f430dd04346773cd3c"
	I1108 10:15:12.482230  476277 cri.go:89] found id: ""
	I1108 10:15:12.482283  476277 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:15:12.497508  476277 out.go:203] 
	W1108 10:15:12.500875  476277 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:15:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:15:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:15:12.500939  476277 out.go:285] * 
	* 
	W1108 10:15:12.508157  476277 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:15:12.511400  476277 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-332573 --alsologtostderr -v=1 failed: exit status 80
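What the stderr trace above shows is the pause flow: confirm kubelet is active, disable it, enumerate CRI containers for the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, then call `sudo runc list -f json`; because that command keeps failing with "open /run/runc: no such file or directory", the retries (roughly 200-500ms apart) are exhausted and the command exits with GUEST_PAUSE. The snippet below is only a standalone sketch of that retry shape, not minikube's actual retry.go; the attempt count and backoff bounds are assumptions:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // runcList runs "sudo runc list -f json" once and returns its combined output.
    func runcList() ([]byte, error) {
        return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    }

    func main() {
        const attempts = 4
        for i := 0; i < attempts; i++ {
            out, err := runcList()
            if err == nil {
                fmt.Printf("runc list succeeded: %s\n", out)
                return
            }
            // Back off with a little jitter, roughly like the
            // "will retry after 224.718154ms" lines in the trace.
            wait := time.Duration(200+rand.Intn(300)) * time.Millisecond
            fmt.Printf("attempt %d failed (%v), retrying after %s\n", i+1, err, wait)
            time.Sleep(wait)
        }
        fmt.Println("giving up: runc never answered (pause maps this to GUEST_PAUSE)")
    }

In the failing run, every attempt hits the same missing /run/runc directory, so the loop can only end in the give-up branch.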
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-332573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-332573:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35",
	        "Created": "2025-11-08T10:12:40.555240094Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:14:04.552952769Z",
	            "FinishedAt": "2025-11-08T10:14:03.653947049Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/hosts",
	        "LogPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35-json.log",
	        "Name": "/old-k8s-version-332573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-332573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-332573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35",
	                "LowerDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-332573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-332573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-332573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-332573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-332573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3807cb5972c484f69df394bdb261b57d3b3711469eb60d92bbb662c666bcf4ff",
	            "SandboxKey": "/var/run/docker/netns/3807cb5972c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-332573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:2e:16:af:fa:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6bc21555591f9a2508b903e9b9efd09495777b9b74fcdbe032a687f04b909be0",
	                    "EndpointID": "77af5758ea12d64e474f337ba40f3838841cb8cd440ca0c2a9d2498eb54c20c6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-332573",
	                        "9c2d89f29f92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
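Earlier in the pause trace, minikube resolves the node's SSH endpoint with a Go template over this same inspect data (index .NetworkSettings.Ports "22/tcp", then HostPort, which here is 33423). Decoding the raw `docker inspect` JSON gives the same answer; the sketch below models only the fields it needs, which is an assumption rather than Docker's full schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // inspectResult models just the slice of NetworkSettings.Ports we need.
    type inspectResult struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func hostPort(container, containerPort string) (string, error) {
        out, err := exec.Command("docker", "inspect", container).Output()
        if err != nil {
            return "", err
        }
        var res []inspectResult // docker inspect always returns a JSON array
        if err := json.Unmarshal(out, &res); err != nil {
            return "", err
        }
        if len(res) == 0 {
            return "", fmt.Errorf("no such container: %s", container)
        }
        bindings := res[0].NetworkSettings.Ports[containerPort]
        if len(bindings) == 0 {
            return "", fmt.Errorf("%s has no binding for %s", container, containerPort)
        }
        return bindings[0].HostPort, nil
    }

    func main() {
        // For the container inspected above this would print 33423.
        port, err := hostPort("old-k8s-version-332573", "22/tcp")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh host port:", port)
    }

Decoding the JSON rather than chaining template index calls also makes the "no binding" case explicit instead of producing a template execution error.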
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-332573 -n old-k8s-version-332573
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-332573 -n old-k8s-version-332573: exit status 2 (361.229185ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-332573 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-332573 logs -n 25: (1.401940654s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-099098 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo containerd config dump                                                                                                                                                                                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo crio config                                                                                                                                                                                                             │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ delete  │ -p cilium-099098                                                                                                                                                                                                                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p force-systemd-env-000082 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ pause   │ -p pause-585281 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │                     │
	│ delete  │ -p pause-585281                                                                                                                                                                                                                               │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ delete  │ -p force-systemd-env-000082                                                                                                                                                                                                                   │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p cert-options-916440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ cert-options-916440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ -p cert-options-916440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ delete  │ -p cert-options-916440                                                                                                                                                                                                                        │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │                     │
	│ stop    │ -p old-k8s-version-332573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │ 08 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-332573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ image   │ old-k8s-version-332573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-332573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:14:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:14:04.235115  474052 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:14:04.235294  474052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:14:04.235326  474052 out.go:374] Setting ErrFile to fd 2...
	I1108 10:14:04.235348  474052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:14:04.235739  474052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:14:04.236280  474052 out.go:368] Setting JSON to false
	I1108 10:14:04.237727  474052 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10594,"bootTime":1762586251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:14:04.237845  474052 start.go:143] virtualization:  
	I1108 10:14:04.240998  474052 out.go:179] * [old-k8s-version-332573] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:14:04.244889  474052 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:14:04.245057  474052 notify.go:221] Checking for updates...
	I1108 10:14:04.250762  474052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:14:04.253775  474052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:14:04.256827  474052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:14:04.259842  474052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:14:04.262796  474052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:14:04.266197  474052 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:14:04.269721  474052 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1108 10:14:04.272664  474052 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:14:04.315496  474052 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:14:04.315609  474052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:14:04.388382  474052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:14:04.378261784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:14:04.388500  474052 docker.go:319] overlay module found
	I1108 10:14:04.393561  474052 out.go:179] * Using the docker driver based on existing profile
	I1108 10:14:04.396491  474052 start.go:309] selected driver: docker
	I1108 10:14:04.396520  474052 start.go:930] validating driver "docker" against &{Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:14:04.396622  474052 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:14:04.397439  474052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:14:04.451085  474052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:14:04.441778956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:14:04.451432  474052 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:14:04.451465  474052 cni.go:84] Creating CNI manager for ""
	I1108 10:14:04.451523  474052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:14:04.451570  474052 start.go:353] cluster config:
	{Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:14:04.456674  474052 out.go:179] * Starting "old-k8s-version-332573" primary control-plane node in "old-k8s-version-332573" cluster
	I1108 10:14:04.459593  474052 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:14:04.462691  474052 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:14:04.465575  474052 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:14:04.465638  474052 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 10:14:04.465650  474052 cache.go:59] Caching tarball of preloaded images
	I1108 10:14:04.465736  474052 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:14:04.465752  474052 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1108 10:14:04.465870  474052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/config.json ...
	I1108 10:14:04.466090  474052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:14:04.485729  474052 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:14:04.485753  474052 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:14:04.485772  474052 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:14:04.485795  474052 start.go:360] acquireMachinesLock for old-k8s-version-332573: {Name:mkf00cfa98960d68304c3826065c66fd6bccf2d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:14:04.485865  474052 start.go:364] duration metric: took 47.647µs to acquireMachinesLock for "old-k8s-version-332573"
	I1108 10:14:04.485894  474052 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:14:04.485902  474052 fix.go:54] fixHost starting: 
	I1108 10:14:04.486195  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:04.511093  474052 fix.go:112] recreateIfNeeded on old-k8s-version-332573: state=Stopped err=<nil>
	W1108 10:14:04.511121  474052 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:14:04.514299  474052 out.go:252] * Restarting existing docker container for "old-k8s-version-332573" ...
	I1108 10:14:04.514400  474052 cli_runner.go:164] Run: docker start old-k8s-version-332573
	I1108 10:14:04.775322  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:04.804831  474052 kic.go:430] container "old-k8s-version-332573" state is running.
	I1108 10:14:04.805253  474052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:14:04.831045  474052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/config.json ...
	I1108 10:14:04.831277  474052 machine.go:94] provisionDockerMachine start ...
	I1108 10:14:04.831363  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:04.855995  474052 main.go:143] libmachine: Using SSH client type: native
	I1108 10:14:04.856325  474052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1108 10:14:04.856341  474052 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:14:04.857216  474052 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:14:08.013175  474052 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332573
	
	I1108 10:14:08.013247  474052 ubuntu.go:182] provisioning hostname "old-k8s-version-332573"
	I1108 10:14:08.013354  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:08.031940  474052 main.go:143] libmachine: Using SSH client type: native
	I1108 10:14:08.032254  474052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1108 10:14:08.032269  474052 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-332573 && echo "old-k8s-version-332573" | sudo tee /etc/hostname
	I1108 10:14:08.193288  474052 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332573
	
	I1108 10:14:08.193369  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:08.214574  474052 main.go:143] libmachine: Using SSH client type: native
	I1108 10:14:08.214898  474052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1108 10:14:08.214920  474052 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-332573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-332573/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-332573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:14:08.365341  474052 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:14:08.365367  474052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:14:08.365408  474052 ubuntu.go:190] setting up certificates
	I1108 10:14:08.365420  474052 provision.go:84] configureAuth start
	I1108 10:14:08.365486  474052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:14:08.384359  474052 provision.go:143] copyHostCerts
	I1108 10:14:08.384437  474052 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:14:08.384456  474052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:14:08.384530  474052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:14:08.384631  474052 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:14:08.384646  474052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:14:08.384676  474052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:14:08.384731  474052 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:14:08.384740  474052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:14:08.384764  474052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:14:08.384818  474052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-332573 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-332573]
	I1108 10:14:09.602789  474052 provision.go:177] copyRemoteCerts
	I1108 10:14:09.602865  474052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:14:09.602906  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:09.621396  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:09.729700  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:14:09.750252  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1108 10:14:09.769568  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:14:09.789190  474052 provision.go:87] duration metric: took 1.423741166s to configureAuth
	I1108 10:14:09.789214  474052 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:14:09.789436  474052 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:14:09.789538  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:09.806518  474052 main.go:143] libmachine: Using SSH client type: native
	I1108 10:14:09.806824  474052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1108 10:14:09.806844  474052 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:14:10.137812  474052 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:14:10.137857  474052 machine.go:97] duration metric: took 5.306563877s to provisionDockerMachine
	I1108 10:14:10.137874  474052 start.go:293] postStartSetup for "old-k8s-version-332573" (driver="docker")
	I1108 10:14:10.137887  474052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:14:10.137959  474052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:14:10.138020  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:10.164542  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:10.277141  474052 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:14:10.280581  474052 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:14:10.280607  474052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:14:10.280619  474052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:14:10.280680  474052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:14:10.280766  474052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:14:10.280886  474052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:14:10.288472  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:14:10.306162  474052 start.go:296] duration metric: took 168.269557ms for postStartSetup
	I1108 10:14:10.306310  474052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:14:10.306378  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:10.323448  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:10.426468  474052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:14:10.431158  474052 fix.go:56] duration metric: took 5.945247585s for fixHost
	I1108 10:14:10.431183  474052 start.go:83] releasing machines lock for "old-k8s-version-332573", held for 5.94530544s
	I1108 10:14:10.431306  474052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:14:10.447825  474052 ssh_runner.go:195] Run: cat /version.json
	I1108 10:14:10.447883  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:10.448153  474052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:14:10.448213  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:10.468378  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:10.478465  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:10.690283  474052 ssh_runner.go:195] Run: systemctl --version
	I1108 10:14:10.698314  474052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:14:10.740389  474052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:14:10.745210  474052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:14:10.745279  474052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:14:10.753989  474052 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:14:10.754015  474052 start.go:496] detecting cgroup driver to use...
	I1108 10:14:10.754077  474052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:14:10.754142  474052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:14:10.769872  474052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:14:10.783177  474052 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:14:10.783254  474052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:14:10.799745  474052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:14:10.813275  474052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:14:10.935512  474052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:14:11.059541  474052 docker.go:234] disabling docker service ...
	I1108 10:14:11.059613  474052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:14:11.076538  474052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:14:11.091890  474052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:14:11.213572  474052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:14:11.341754  474052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:14:11.355789  474052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:14:11.371301  474052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 10:14:11.371366  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.382253  474052 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:14:11.382321  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.391161  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.399551  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.408883  474052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:14:11.417487  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.426942  474052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.435641  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.444399  474052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:14:11.452789  474052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:14:11.460631  474052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:14:11.578853  474052 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:14:11.713660  474052 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:14:11.713728  474052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:14:11.717910  474052 start.go:564] Will wait 60s for crictl version
	I1108 10:14:11.717979  474052 ssh_runner.go:195] Run: which crictl
	I1108 10:14:11.721837  474052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:14:11.751057  474052 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:14:11.751141  474052 ssh_runner.go:195] Run: crio --version
	I1108 10:14:11.780765  474052 ssh_runner.go:195] Run: crio --version
	I1108 10:14:11.811808  474052 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1108 10:14:11.814590  474052 cli_runner.go:164] Run: docker network inspect old-k8s-version-332573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:14:11.831244  474052 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:14:11.835330  474052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:14:11.845365  474052 kubeadm.go:884] updating cluster {Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:14:11.845481  474052 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:14:11.845540  474052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:14:11.880753  474052 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:14:11.880776  474052 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:14:11.880830  474052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:14:11.907119  474052 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:14:11.907145  474052 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:14:11.907158  474052 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1108 10:14:11.907262  474052 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-332573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:14:11.907351  474052 ssh_runner.go:195] Run: crio config
	I1108 10:14:11.965256  474052 cni.go:84] Creating CNI manager for ""
	I1108 10:14:11.965281  474052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:14:11.965303  474052 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:14:11.965328  474052 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-332573 NodeName:old-k8s-version-332573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:14:11.965476  474052 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-332573"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:14:11.965554  474052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1108 10:14:11.974486  474052 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:14:11.974562  474052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:14:11.982261  474052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1108 10:14:11.995216  474052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:14:12.017153  474052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1108 10:14:12.032568  474052 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:14:12.036216  474052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:14:12.046681  474052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:14:12.171928  474052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:14:12.188259  474052 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573 for IP: 192.168.85.2
	I1108 10:14:12.188282  474052 certs.go:195] generating shared ca certs ...
	I1108 10:14:12.188298  474052 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:14:12.188438  474052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:14:12.188488  474052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:14:12.188500  474052 certs.go:257] generating profile certs ...
	I1108 10:14:12.188585  474052 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.key
	I1108 10:14:12.188659  474052 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key.99f33f23
	I1108 10:14:12.188699  474052 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.key
	I1108 10:14:12.188825  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:14:12.188858  474052 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:14:12.188869  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:14:12.188891  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:14:12.188950  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:14:12.188978  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:14:12.189028  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:14:12.189677  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:14:12.215480  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:14:12.236811  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:14:12.259035  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:14:12.282952  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1108 10:14:12.306516  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:14:12.331377  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:14:12.358653  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:14:12.377464  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:14:12.404500  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:14:12.427117  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:14:12.448977  474052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:14:12.464417  474052 ssh_runner.go:195] Run: openssl version
	I1108 10:14:12.472830  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:14:12.483101  474052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:14:12.492586  474052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:14:12.492659  474052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:14:12.547339  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:14:12.556711  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:14:12.566040  474052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:14:12.569859  474052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:14:12.569927  474052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:14:12.611983  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:14:12.620940  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:14:12.630317  474052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:14:12.634620  474052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:14:12.634745  474052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:14:12.676108  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:14:12.684352  474052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:14:12.688507  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:14:12.731343  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:14:12.772953  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:14:12.822239  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:14:12.875432  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:14:12.949967  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 10:14:13.016399  474052 kubeadm.go:401] StartCluster: {Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:14:13.016528  474052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:14:13.016614  474052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:14:13.082003  474052 cri.go:89] found id: "da1e13436901f6a2118e84439fab747e99dc786d2425d761e4ad1fad19016839"
	I1108 10:14:13.082080  474052 cri.go:89] found id: "c4401d75cbcf9e18223b7ce1c2681a4104ec2ca285d171c2fdc61f9eeaa9d089"
	I1108 10:14:13.082100  474052 cri.go:89] found id: "ddf965d723cdc3f9815a4ca0f4c33a9935ba39f91c2f7f5f2b12cf47d8b81e89"
	I1108 10:14:13.082123  474052 cri.go:89] found id: "f1403b9fcd37ed7fa8ce4d09687e1e5c99a91bea4d445e900f4d34951698c916"
	I1108 10:14:13.082144  474052 cri.go:89] found id: ""
	I1108 10:14:13.082219  474052 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:14:13.107910  474052 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:14:13Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:14:13.108041  474052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:14:13.117768  474052 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:14:13.117840  474052 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:14:13.117905  474052 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:14:13.132456  474052 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:14:13.133209  474052 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-332573" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:14:13.133532  474052 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-292236/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-332573" cluster setting kubeconfig missing "old-k8s-version-332573" context setting]
	I1108 10:14:13.134065  474052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:14:13.135512  474052 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:14:13.146223  474052 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:14:13.146299  474052 kubeadm.go:602] duration metric: took 28.438782ms to restartPrimaryControlPlane
	I1108 10:14:13.146323  474052 kubeadm.go:403] duration metric: took 129.934776ms to StartCluster
	I1108 10:14:13.146359  474052 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:14:13.146438  474052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:14:13.147404  474052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:14:13.147674  474052 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:14:13.148113  474052 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:14:13.148164  474052 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:14:13.148332  474052 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-332573"
	I1108 10:14:13.148359  474052 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-332573"
	W1108 10:14:13.148427  474052 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:14:13.148392  474052 addons.go:70] Setting dashboard=true in profile "old-k8s-version-332573"
	I1108 10:14:13.148485  474052 addons.go:239] Setting addon dashboard=true in "old-k8s-version-332573"
	W1108 10:14:13.148491  474052 addons.go:248] addon dashboard should already be in state true
	I1108 10:14:13.148507  474052 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:14:13.149290  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:13.149486  474052 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:14:13.149922  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:13.148401  474052 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-332573"
	I1108 10:14:13.150497  474052 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-332573"
	I1108 10:14:13.150754  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:13.152248  474052 out.go:179] * Verifying Kubernetes components...
	I1108 10:14:13.164326  474052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:14:13.209441  474052 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:14:13.213601  474052 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:14:13.213728  474052 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:14:13.218260  474052 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:14:13.218282  474052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:14:13.218347  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:13.218549  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:14:13.218563  474052 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:14:13.218605  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:13.237410  474052 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-332573"
	W1108 10:14:13.237431  474052 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:14:13.237456  474052 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:14:13.237891  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:13.290116  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:13.293923  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:13.311153  474052 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:14:13.311174  474052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:14:13.311244  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:13.344476  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:13.495341  474052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:14:13.530581  474052 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-332573" to be "Ready" ...
	I1108 10:14:13.560054  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:14:13.560122  474052 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:14:13.585310  474052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:14:13.589409  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:14:13.589482  474052 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:14:13.633880  474052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:14:13.640428  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:14:13.640456  474052 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:14:13.665509  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:14:13.665535  474052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:14:13.745090  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:14:13.745117  474052 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:14:13.804665  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:14:13.804699  474052 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:14:13.901942  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:14:13.901969  474052 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:14:13.959664  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:14:13.959690  474052 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:14:13.977119  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:14:13.977148  474052 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:14:14.002207  474052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:14:17.847834  474052 node_ready.go:49] node "old-k8s-version-332573" is "Ready"
	I1108 10:14:17.847866  474052 node_ready.go:38] duration metric: took 4.317199138s for node "old-k8s-version-332573" to be "Ready" ...
	I1108 10:14:17.847880  474052 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:14:17.847938  474052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:14:19.367118  474052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.781724066s)
	I1108 10:14:19.367181  474052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.733279822s)
	I1108 10:14:20.058955  474052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.056698155s)
	I1108 10:14:20.059191  474052 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.211235585s)
	I1108 10:14:20.059226  474052 api_server.go:72] duration metric: took 6.911485086s to wait for apiserver process to appear ...
	I1108 10:14:20.059236  474052 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:14:20.059256  474052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:14:20.062102  474052 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-332573 addons enable metrics-server
	
	I1108 10:14:20.065097  474052 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1108 10:14:20.068035  474052 addons.go:515] duration metric: took 6.919857105s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1108 10:14:20.069002  474052 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:14:20.070466  474052 api_server.go:141] control plane version: v1.28.0
	I1108 10:14:20.070512  474052 api_server.go:131] duration metric: took 11.26885ms to wait for apiserver health ...
	I1108 10:14:20.070522  474052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:14:20.074729  474052 system_pods.go:59] 8 kube-system pods found
	I1108 10:14:20.074768  474052 system_pods.go:61] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:14:20.074781  474052 system_pods.go:61] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:14:20.074789  474052 system_pods.go:61] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:14:20.074797  474052 system_pods.go:61] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:14:20.074805  474052 system_pods.go:61] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:14:20.074814  474052 system_pods.go:61] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:14:20.074822  474052 system_pods.go:61] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:14:20.074830  474052 system_pods.go:61] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Running
	I1108 10:14:20.074836  474052 system_pods.go:74] duration metric: took 4.308623ms to wait for pod list to return data ...
	I1108 10:14:20.074845  474052 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:14:20.078373  474052 default_sa.go:45] found service account: "default"
	I1108 10:14:20.078402  474052 default_sa.go:55] duration metric: took 3.543022ms for default service account to be created ...
	I1108 10:14:20.078416  474052 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:14:20.082460  474052 system_pods.go:86] 8 kube-system pods found
	I1108 10:14:20.082494  474052 system_pods.go:89] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:14:20.082505  474052 system_pods.go:89] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:14:20.082512  474052 system_pods.go:89] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:14:20.082520  474052 system_pods.go:89] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:14:20.082531  474052 system_pods.go:89] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:14:20.082545  474052 system_pods.go:89] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:14:20.082552  474052 system_pods.go:89] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:14:20.082557  474052 system_pods.go:89] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Running
	I1108 10:14:20.082574  474052 system_pods.go:126] duration metric: took 4.145037ms to wait for k8s-apps to be running ...
	I1108 10:14:20.082589  474052 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:14:20.082650  474052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:14:20.101654  474052 system_svc.go:56] duration metric: took 19.053869ms WaitForService to wait for kubelet
	I1108 10:14:20.101693  474052 kubeadm.go:587] duration metric: took 6.953959531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:14:20.101714  474052 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:14:20.106013  474052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:14:20.106052  474052 node_conditions.go:123] node cpu capacity is 2
	I1108 10:14:20.106065  474052 node_conditions.go:105] duration metric: took 4.343848ms to run NodePressure ...
	I1108 10:14:20.106078  474052 start.go:242] waiting for startup goroutines ...
	I1108 10:14:20.106085  474052 start.go:247] waiting for cluster config update ...
	I1108 10:14:20.106096  474052 start.go:256] writing updated cluster config ...
	I1108 10:14:20.106401  474052 ssh_runner.go:195] Run: rm -f paused
	I1108 10:14:20.110584  474052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:14:20.115192  474052 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-4s446" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:14:22.120444  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:24.121246  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:26.621769  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:29.121676  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:31.621294  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:34.121979  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:36.126200  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:38.621478  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:40.623175  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:42.626669  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:45.126003  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:47.620570  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:49.625381  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:52.121354  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:54.621216  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	I1108 10:14:56.621538  474052 pod_ready.go:94] pod "coredns-5dd5756b68-4s446" is "Ready"
	I1108 10:14:56.621571  474052 pod_ready.go:86] duration metric: took 36.50634954s for pod "coredns-5dd5756b68-4s446" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.624797  474052 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.630152  474052 pod_ready.go:94] pod "etcd-old-k8s-version-332573" is "Ready"
	I1108 10:14:56.630184  474052 pod_ready.go:86] duration metric: took 5.353219ms for pod "etcd-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.633507  474052 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.638486  474052 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-332573" is "Ready"
	I1108 10:14:56.638511  474052 pod_ready.go:86] duration metric: took 4.975181ms for pod "kube-apiserver-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.641670  474052 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.819001  474052 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-332573" is "Ready"
	I1108 10:14:56.819034  474052 pod_ready.go:86] duration metric: took 177.335734ms for pod "kube-controller-manager-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:57.019968  474052 pod_ready.go:83] waiting for pod "kube-proxy-bn8tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:57.418725  474052 pod_ready.go:94] pod "kube-proxy-bn8tb" is "Ready"
	I1108 10:14:57.418752  474052 pod_ready.go:86] duration metric: took 398.753669ms for pod "kube-proxy-bn8tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:57.619342  474052 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:58.018462  474052 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-332573" is "Ready"
	I1108 10:14:58.018493  474052 pod_ready.go:86] duration metric: took 399.12219ms for pod "kube-scheduler-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:58.018506  474052 pod_ready.go:40] duration metric: took 37.907888318s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:14:58.072791  474052 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1108 10:14:58.075973  474052 out.go:203] 
	W1108 10:14:58.079023  474052 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 10:14:58.081982  474052 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 10:14:58.085305  474052 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-332573" cluster and "default" namespace by default
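	A minimal sketch of the failing unpause probe earlier in this log, reusing the two commands exactly as the log records them (they would be run inside the node, e.g. via `minikube ssh -p old-k8s-version-332573`):
	
	  # list kube-system containers known to CRI-O (same query as cri.go:54 above)
	  sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	  # runc's own listing; on this node /run/runc does not exist, so this exits 1
	  # with "open /run/runc: no such file or directory", as logged at kubeadm.go:408
	  sudo runc list -f json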
	
	
	==> CRI-O <==
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.208410196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.215987036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.216528849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.237250955Z" level=info msg="Created container 6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf/dashboard-metrics-scraper" id=7ad89db2-d1e4-492a-bebf-5f40f3370c0a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.238515628Z" level=info msg="Starting container: 6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861" id=650d44a4-d67a-4e5c-9d59-694801e67d16 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.240655573Z" level=info msg="Started container" PID=1641 containerID=6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf/dashboard-metrics-scraper id=650d44a4-d67a-4e5c-9d59-694801e67d16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67be495f1793912d7a9937472751642b3a33e2a79a17062c5c68a4df6dc195a2
	Nov 08 10:14:51 old-k8s-version-332573 conmon[1639]: conmon 6494beae2f608531b20d <ninfo>: container 1641 exited with status 1
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.593148085Z" level=info msg="Removing container: 42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1" id=55ffa854-0aaf-4dd0-9c62-a86f6853b42e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.603433522Z" level=info msg="Error loading conmon cgroup of container 42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1: cgroup deleted" id=55ffa854-0aaf-4dd0-9c62-a86f6853b42e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.60802905Z" level=info msg="Removed container 42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf/dashboard-metrics-scraper" id=55ffa854-0aaf-4dd0-9c62-a86f6853b42e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.23303839Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.237917858Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.237955069Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.237977691Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.241242091Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.241279548Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.241303778Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.244744188Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.244783605Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.244814096Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.247887135Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.247920604Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.247942257Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.251352562Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.251391611Z" level=info msg="Updated default CNI network name to kindnet"
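	A hedged sketch for pulling the CRI-O excerpt above straight from the node; it assumes CRI-O runs as a systemd unit named "crio" (suggested by the crio[652] journal prefix, but not confirmed by this report):
	
	  # tail CRI-O's journal around the window shown above
	  sudo journalctl -u crio --since "2025-11-08 10:14:51" --no-pager | tail -n 50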
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	6494beae2f608       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   67be495f17939       dashboard-metrics-scraper-5f989dc9cf-pjspf       kubernetes-dashboard
	e07a28f291b2f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   d7ae8867c7c04       storage-provisioner                              kube-system
	f43848c6090cd       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   0f53852feb7d6       kubernetes-dashboard-8694d4445c-xppkg            kubernetes-dashboard
	7678cbdbaf440       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   ce9a18b52f96f       busybox                                          default
	f35906ed98b83       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   c6a549ef5c079       coredns-5dd5756b68-4s446                         kube-system
	1a824beedc294       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   d7ae8867c7c04       storage-provisioner                              kube-system
	3005567625f32       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   641023971e7fe       kindnet-qg5t6                                    kube-system
	6faa522a3460f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   decf33fd09a54       kube-proxy-bn8tb                                 kube-system
	da1e13436901f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   5c9047851c1a4       kube-scheduler-old-k8s-version-332573            kube-system
	c4401d75cbcf9       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   9f5fbdb6abd85       kube-controller-manager-old-k8s-version-332573   kube-system
	ddf965d723cdc       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   7f4adb7008601       kube-apiserver-old-k8s-version-332573            kube-system
	f1403b9fcd37e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   4ab28b7e9f2d5       etcd-old-k8s-version-332573                      kube-system
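	The table above has the layout of `crictl ps -a`; as a rough sketch (the exact invocation used for this report is not shown), the same view can be regenerated on the node with:
	
	  # all containers, including exited ones such as the crash-looping
	  # dashboard-metrics-scraper (ATTEMPT 2, STATE Exited)
	  sudo crictl ps -a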
	
	
	==> coredns [f35906ed98b83b5dffa8616e43242968b9b5736fdb970a04ad8e70d083d54e91] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55534 - 46108 "HINFO IN 6170447637721567867.9205345165333730542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014643715s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
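	The warnings above show coredns timing out against the in-cluster API Service (10.96.0.1:443) before it eventually syncs. A hedged sketch for fetching the same output with kubectl, assuming the conventional k8s-app=kube-dns label that the report itself waits on:
	
	  # stream recent logs from the coredns pod(s) in kube-system
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50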
	
	
	==> describe nodes <==
	Name:               old-k8s-version-332573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-332573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=old-k8s-version-332573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_13_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:13:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-332573
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:15:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:14:48 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:14:48 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:14:48 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:14:48 +0000   Sat, 08 Nov 2025 10:13:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-332573
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d2774f32-76bc-4924-aa00-9e91907fb5f7
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-4s446                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-332573                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m6s
	  kube-system                 kindnet-qg5t6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-332573             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-332573    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-bn8tb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-332573             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-pjspf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xppkg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-332573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-332573 event: Registered Node old-k8s-version-332573 in Controller
	  Normal  NodeReady                97s                    kubelet          Node old-k8s-version-332573 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-332573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                    node-controller  Node old-k8s-version-332573 event: Registered Node old-k8s-version-332573 in Controller
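	The dump above is standard `kubectl describe node` output; the "Allocated resources" percentages follow from the Allocatable block (850m of CPU requests against 2 CPUs = 2000m is 42%, and 220Mi of memory against 8022300Ki is roughly 2%). A sketch for reproducing it:
	
	  # same view as above, for the single control-plane node in this profile
	  kubectl describe node old-k8s-version-332573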
	
	
	==> dmesg <==
	[Nov 8 09:45] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:50] overlayfs: idmapped layers are currently not supported
	[ +37.319908] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:51] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f1403b9fcd37ed7fa8ce4d09687e1e5c99a91bea4d445e900f4d34951698c916] <==
	{"level":"info","ts":"2025-11-08T10:14:13.447036Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:14:13.447056Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:14:13.447297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-08T10:14:13.447371Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-08T10:14:13.447459Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:14:13.447484Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:14:13.478902Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T10:14:13.479086Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T10:14:13.479106Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T10:14:13.479194Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:14:13.479203Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:14:14.918142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T10:14:14.918191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T10:14:14.918224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-08T10:14:14.918238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T10:14:14.918245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-08T10:14:14.918254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-08T10:14:14.918262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-08T10:14:14.923254Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-332573 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T10:14:14.923302Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:14:14.927015Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-08T10:14:14.92332Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:14:14.932975Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T10:14:14.933014Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T10:14:14.964323Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:15:13 up  2:57,  0 user,  load average: 2.14, 2.62, 2.25
	Linux old-k8s-version-332573 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3005567625f32fc0c3b56e4ac4331d3fa613587bca9f198558bc9da766621077] <==
	I1108 10:14:19.032834       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:14:19.041224       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:14:19.041444       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:14:19.041459       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:14:19.041473       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:14:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:14:19.230830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:14:19.230849       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:14:19.230857       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:14:19.231156       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:14:49.232599       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:14:49.232601       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:14:49.232771       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:14:49.232836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:14:50.230948       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:14:50.230995       1 metrics.go:72] Registering metrics
	I1108 10:14:50.231049       1 controller.go:711] "Syncing nftables rules"
	I1108 10:14:59.231996       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:14:59.232699       1 main.go:301] handling current node
	I1108 10:15:09.237283       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:15:09.237392       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ddf965d723cdc3f9815a4ca0f4c33a9935ba39f91c2f7f5f2b12cf47d8b81e89] <==
	I1108 10:14:17.821930       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 10:14:17.828291       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 10:14:17.830542       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:14:17.830677       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 10:14:17.831335       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:14:17.835040       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 10:14:17.835131       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 10:14:17.837962       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 10:14:17.838739       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 10:14:17.839601       1 aggregator.go:166] initial CRD sync complete...
	I1108 10:14:17.839621       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 10:14:17.839627       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:14:17.839633       1 cache.go:39] Caches are synced for autoregister controller
	E1108 10:14:17.895309       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:14:18.537324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:14:19.856270       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 10:14:19.900880       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 10:14:19.926691       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:14:19.938716       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:14:19.948449       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 10:14:20.023149       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.133.233"}
	I1108 10:14:20.051264       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.165.36"}
	I1108 10:14:30.223384       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 10:14:30.280526       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 10:14:30.672864       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
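	The apiserver log above records ClusterIP allocations 10.106.133.233 and 10.96.165.36 for the two dashboard Services. A hedged sketch for confirming them (not part of the captured test run):
	
	  # list the Services created by the dashboard addon
	  kubectl -n kubernetes-dashboard get svc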
	
	
	==> kube-controller-manager [c4401d75cbcf9e18223b7ce1c2681a4104ec2ca285d171c2fdc61f9eeaa9d089] <==
	I1108 10:14:30.323756       1 shared_informer.go:318] Caches are synced for persistent volume
	I1108 10:14:30.581727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="391.271218ms"
	I1108 10:14:30.581884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.485µs"
	I1108 10:14:30.586872       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-pjspf"
	I1108 10:14:30.586900       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xppkg"
	I1108 10:14:30.602383       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="369.258221ms"
	I1108 10:14:30.611372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="382.224306ms"
	I1108 10:14:30.637319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="25.877605ms"
	I1108 10:14:30.637713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.658µs"
	I1108 10:14:30.644315       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.876252ms"
	I1108 10:14:30.646412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.172µs"
	I1108 10:14:30.656594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.533µs"
	I1108 10:14:30.676324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.627µs"
	I1108 10:14:30.742149       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:14:30.769424       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:14:30.769458       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 10:14:36.573607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.13924ms"
	I1108 10:14:36.575498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="143.32µs"
	I1108 10:14:40.574252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.69µs"
	I1108 10:14:41.580100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.315µs"
	I1108 10:14:42.581749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.053µs"
	I1108 10:14:51.609268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.237µs"
	I1108 10:14:56.161077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.740406ms"
	I1108 10:14:56.161183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.262µs"
	I1108 10:15:01.221009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="121.863µs"
	
	
	==> kube-proxy [6faa522a3460f1fe9a0b871ab93dca9008282501f9c393d4f78de19936b855b1] <==
	I1108 10:14:19.207747       1 server_others.go:69] "Using iptables proxy"
	I1108 10:14:19.233371       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1108 10:14:19.259223       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:14:19.261360       1 server_others.go:152] "Using iptables Proxier"
	I1108 10:14:19.261453       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 10:14:19.261580       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 10:14:19.261637       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 10:14:19.262043       1 server.go:846] "Version info" version="v1.28.0"
	I1108 10:14:19.262290       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:14:19.263009       1 config.go:188] "Starting service config controller"
	I1108 10:14:19.263075       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 10:14:19.263117       1 config.go:97] "Starting endpoint slice config controller"
	I1108 10:14:19.263142       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 10:14:19.265273       1 config.go:315] "Starting node config controller"
	I1108 10:14:19.266249       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 10:14:19.364046       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 10:14:19.364096       1 shared_informer.go:318] Caches are synced for service config
	I1108 10:14:19.366877       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [da1e13436901f6a2118e84439fab747e99dc786d2425d761e4ad1fad19016839] <==
	I1108 10:14:16.264582       1 serving.go:348] Generated self-signed cert in-memory
	W1108 10:14:17.802080       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 10:14:17.802111       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:14:17.802122       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 10:14:17.802129       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 10:14:17.861109       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1108 10:14:17.861144       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:14:17.864779       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1108 10:14:17.864890       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:14:17.864905       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 10:14:17.864940       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 10:14:17.969095       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.608302     776 topology_manager.go:215] "Topology Admit Handler" podUID="daa8854a-6b69-46b9-8b93-303b0882bea4" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-xppkg"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.791757     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e12916d3-9c5b-4931-b373-89d06b906ff5-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-pjspf\" (UID: \"e12916d3-9c5b-4931-b373-89d06b906ff5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.791822     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxgm\" (UniqueName: \"kubernetes.io/projected/e12916d3-9c5b-4931-b373-89d06b906ff5-kube-api-access-8bxgm\") pod \"dashboard-metrics-scraper-5f989dc9cf-pjspf\" (UID: \"e12916d3-9c5b-4931-b373-89d06b906ff5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.791851     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxfh\" (UniqueName: \"kubernetes.io/projected/daa8854a-6b69-46b9-8b93-303b0882bea4-kube-api-access-xnxfh\") pod \"kubernetes-dashboard-8694d4445c-xppkg\" (UID: \"daa8854a-6b69-46b9-8b93-303b0882bea4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xppkg"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.791880     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/daa8854a-6b69-46b9-8b93-303b0882bea4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xppkg\" (UID: \"daa8854a-6b69-46b9-8b93-303b0882bea4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xppkg"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: W1108 10:14:30.938426     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/crio-0f53852feb7d6de1e124b72688f809d78ee19852aad92021b46529b67e940ecc WatchSource:0}: Error finding container 0f53852feb7d6de1e124b72688f809d78ee19852aad92021b46529b67e940ecc: Status 404 returned error can't find the container with id 0f53852feb7d6de1e124b72688f809d78ee19852aad92021b46529b67e940ecc
	Nov 08 10:14:31 old-k8s-version-332573 kubelet[776]: W1108 10:14:31.225054     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/crio-67be495f1793912d7a9937472751642b3a33e2a79a17062c5c68a4df6dc195a2 WatchSource:0}: Error finding container 67be495f1793912d7a9937472751642b3a33e2a79a17062c5c68a4df6dc195a2: Status 404 returned error can't find the container with id 67be495f1793912d7a9937472751642b3a33e2a79a17062c5c68a4df6dc195a2
	Nov 08 10:14:36 old-k8s-version-332573 kubelet[776]: I1108 10:14:36.555937     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xppkg" podStartSLOduration=1.809151145 podCreationTimestamp="2025-11-08 10:14:30 +0000 UTC" firstStartedPulling="2025-11-08 10:14:30.941213049 +0000 UTC m=+18.748569452" lastFinishedPulling="2025-11-08 10:14:35.687253218 +0000 UTC m=+23.494609629" observedRunningTime="2025-11-08 10:14:36.554147574 +0000 UTC m=+24.361503976" watchObservedRunningTime="2025-11-08 10:14:36.555191322 +0000 UTC m=+24.362547733"
	Nov 08 10:14:40 old-k8s-version-332573 kubelet[776]: I1108 10:14:40.553125     776 scope.go:117] "RemoveContainer" containerID="007c2059e190983c40d17103d7876fee02e67f7461631cb93a493bfd7a392825"
	Nov 08 10:14:41 old-k8s-version-332573 kubelet[776]: I1108 10:14:41.557299     776 scope.go:117] "RemoveContainer" containerID="42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1"
	Nov 08 10:14:41 old-k8s-version-332573 kubelet[776]: E1108 10:14:41.559018     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pjspf_kubernetes-dashboard(e12916d3-9c5b-4931-b373-89d06b906ff5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf" podUID="e12916d3-9c5b-4931-b373-89d06b906ff5"
	Nov 08 10:14:41 old-k8s-version-332573 kubelet[776]: I1108 10:14:41.559638     776 scope.go:117] "RemoveContainer" containerID="007c2059e190983c40d17103d7876fee02e67f7461631cb93a493bfd7a392825"
	Nov 08 10:14:42 old-k8s-version-332573 kubelet[776]: I1108 10:14:42.560758     776 scope.go:117] "RemoveContainer" containerID="42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1"
	Nov 08 10:14:42 old-k8s-version-332573 kubelet[776]: E1108 10:14:42.561085     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pjspf_kubernetes-dashboard(e12916d3-9c5b-4931-b373-89d06b906ff5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf" podUID="e12916d3-9c5b-4931-b373-89d06b906ff5"
	Nov 08 10:14:49 old-k8s-version-332573 kubelet[776]: I1108 10:14:49.578236     776 scope.go:117] "RemoveContainer" containerID="1a824beedc294c0a61db23a182cf893538af18997f4c56b81e23ccb1987066e7"
	Nov 08 10:14:51 old-k8s-version-332573 kubelet[776]: I1108 10:14:51.205115     776 scope.go:117] "RemoveContainer" containerID="42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1"
	Nov 08 10:14:51 old-k8s-version-332573 kubelet[776]: I1108 10:14:51.587045     776 scope.go:117] "RemoveContainer" containerID="42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1"
	Nov 08 10:14:51 old-k8s-version-332573 kubelet[776]: I1108 10:14:51.587280     776 scope.go:117] "RemoveContainer" containerID="6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861"
	Nov 08 10:14:51 old-k8s-version-332573 kubelet[776]: E1108 10:14:51.587610     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pjspf_kubernetes-dashboard(e12916d3-9c5b-4931-b373-89d06b906ff5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf" podUID="e12916d3-9c5b-4931-b373-89d06b906ff5"
	Nov 08 10:15:01 old-k8s-version-332573 kubelet[776]: I1108 10:15:01.205126     776 scope.go:117] "RemoveContainer" containerID="6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861"
	Nov 08 10:15:01 old-k8s-version-332573 kubelet[776]: E1108 10:15:01.205456     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pjspf_kubernetes-dashboard(e12916d3-9c5b-4931-b373-89d06b906ff5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf" podUID="e12916d3-9c5b-4931-b373-89d06b906ff5"
	Nov 08 10:15:10 old-k8s-version-332573 kubelet[776]: I1108 10:15:10.441227     776 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 10:15:10 old-k8s-version-332573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:15:10 old-k8s-version-332573 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:15:10 old-k8s-version-332573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f43848c6090cdad59f16b76bc22d2edf9cdbf2bc870489f430dd04346773cd3c] <==
	2025/11/08 10:14:35 Using namespace: kubernetes-dashboard
	2025/11/08 10:14:35 Using in-cluster config to connect to apiserver
	2025/11/08 10:14:35 Using secret token for csrf signing
	2025/11/08 10:14:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:14:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:14:35 Successful initial request to the apiserver, version: v1.28.0
	2025/11/08 10:14:35 Generating JWE encryption key
	2025/11/08 10:14:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:14:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:14:36 Initializing JWE encryption key from synchronized object
	2025/11/08 10:14:36 Creating in-cluster Sidecar client
	2025/11/08 10:14:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:14:36 Serving insecurely on HTTP port: 9090
	2025/11/08 10:15:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:14:35 Starting overwatch
	
	
	==> storage-provisioner [1a824beedc294c0a61db23a182cf893538af18997f4c56b81e23ccb1987066e7] <==
	I1108 10:14:19.170120       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:14:49.172674       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e07a28f291b2fb58d4ce48d5496cd7dba9831b2944b34f9927c168afd4522bd7] <==
	I1108 10:14:49.631317       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:14:49.644062       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:14:49.644105       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 10:15:07.050335       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:15:07.050512       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332573_fbdd49ec-c7da-4892-86fc-e11dfb21024d!
	I1108 10:15:07.050915       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e016727-b435-4896-8e63-48348502e137", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-332573_fbdd49ec-c7da-4892-86fc-e11dfb21024d became leader
	I1108 10:15:07.151380       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332573_fbdd49ec-c7da-4892-86fc-e11dfb21024d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-332573 -n old-k8s-version-332573
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-332573 -n old-k8s-version-332573: exit status 2 (380.207481ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
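Note: the "(may be ok)" above reflects that "out/minikube-linux-arm64 status" can exit non-zero while still printing a usable component state (here it printed "Running" with exit status 2), so the harness records the code and keeps collecting post-mortem data. A minimal sketch of capturing that exit code the same way from Go, using the command line and profile name from the run above (an illustration, not the harness's own helper):

	// status_exit.go: run the same status query as above and report the raw
	// exit code instead of treating it as a failure (sketch).
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "old-k8s-version-332573")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero exit with usable stdout, e.g. "Running" alongside exit status 2.
			fmt.Printf("status=%q exit=%d\n", string(out), exitErr.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("status=%q exit=0\n", string(out))
	}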
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-332573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-332573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-332573:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35",
	        "Created": "2025-11-08T10:12:40.555240094Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:14:04.552952769Z",
	            "FinishedAt": "2025-11-08T10:14:03.653947049Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/hosts",
	        "LogPath": "/var/lib/docker/containers/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35-json.log",
	        "Name": "/old-k8s-version-332573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-332573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-332573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35",
	                "LowerDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9d1f462c8c27c4cdb58d2636a0f43049369f6eef19703e5e55789345ed2d59b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-332573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-332573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-332573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-332573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-332573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3807cb5972c484f69df394bdb261b57d3b3711469eb60d92bbb662c666bcf4ff",
	            "SandboxKey": "/var/run/docker/netns/3807cb5972c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-332573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:2e:16:af:fa:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6bc21555591f9a2508b903e9b9efd09495777b9b74fcdbe032a687f04b909be0",
	                    "EndpointID": "77af5758ea12d64e474f337ba40f3838841cb8cd440ca0c2a9d2498eb54c20c6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-332573",
	                        "9c2d89f29f92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
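For reference, individual fields from the inspect data above can be read directly with Docker's Go-template format flag; this is the same mechanism the cli_runner.go invocations in the start log below use (for example, docker container inspect old-k8s-version-332573 --format={{.State.Status}}). A minimal sketch in Go, assuming the docker CLI is on PATH and the container name from this profile (an illustration, not the test helper itself):

	// inspect_state.go: read the run/pause state that the Pause test asserts on,
	// via "docker container inspect -f" (sketch).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{.State.Status}} paused={{.State.Paused}}",
			"old-k8s-version-332573").CombinedOutput()
		if err != nil {
			fmt.Printf("inspect failed: %v: %s\n", err, out)
			return
		}
		// With the state shown above this prints: running paused=false
		fmt.Println(strings.TrimSpace(string(out)))
	}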
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-332573 -n old-k8s-version-332573
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-332573 -n old-k8s-version-332573: exit status 2 (391.14942ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-332573 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-332573 logs -n 25: (1.293742918s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-099098 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo containerd config dump                                                                                                                                                                                                  │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ ssh     │ -p cilium-099098 sudo crio config                                                                                                                                                                                                             │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ delete  │ -p cilium-099098                                                                                                                                                                                                                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p force-systemd-env-000082 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ pause   │ -p pause-585281 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │                     │
	│ delete  │ -p pause-585281                                                                                                                                                                                                                               │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ delete  │ -p force-systemd-env-000082                                                                                                                                                                                                                   │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p cert-options-916440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ cert-options-916440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ -p cert-options-916440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ delete  │ -p cert-options-916440                                                                                                                                                                                                                        │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │                     │
	│ stop    │ -p old-k8s-version-332573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │ 08 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-332573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ image   │ old-k8s-version-332573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-332573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:14:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:14:04.235115  474052 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:14:04.235294  474052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:14:04.235326  474052 out.go:374] Setting ErrFile to fd 2...
	I1108 10:14:04.235348  474052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:14:04.235739  474052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:14:04.236280  474052 out.go:368] Setting JSON to false
	I1108 10:14:04.237727  474052 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10594,"bootTime":1762586251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:14:04.237845  474052 start.go:143] virtualization:  
	I1108 10:14:04.240998  474052 out.go:179] * [old-k8s-version-332573] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:14:04.244889  474052 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:14:04.245057  474052 notify.go:221] Checking for updates...
	I1108 10:14:04.250762  474052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:14:04.253775  474052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:14:04.256827  474052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:14:04.259842  474052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:14:04.262796  474052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:14:04.266197  474052 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:14:04.269721  474052 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1108 10:14:04.272664  474052 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:14:04.315496  474052 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:14:04.315609  474052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:14:04.388382  474052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:14:04.378261784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:14:04.388500  474052 docker.go:319] overlay module found
	I1108 10:14:04.393561  474052 out.go:179] * Using the docker driver based on existing profile
	I1108 10:14:04.396491  474052 start.go:309] selected driver: docker
	I1108 10:14:04.396520  474052 start.go:930] validating driver "docker" against &{Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:14:04.396622  474052 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:14:04.397439  474052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:14:04.451085  474052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:14:04.441778956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:14:04.451432  474052 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:14:04.451465  474052 cni.go:84] Creating CNI manager for ""
	I1108 10:14:04.451523  474052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:14:04.451570  474052 start.go:353] cluster config:
	{Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:14:04.456674  474052 out.go:179] * Starting "old-k8s-version-332573" primary control-plane node in "old-k8s-version-332573" cluster
	I1108 10:14:04.459593  474052 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:14:04.462691  474052 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:14:04.465575  474052 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:14:04.465638  474052 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 10:14:04.465650  474052 cache.go:59] Caching tarball of preloaded images
	I1108 10:14:04.465736  474052 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:14:04.465752  474052 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1108 10:14:04.465870  474052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/config.json ...
	I1108 10:14:04.466090  474052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:14:04.485729  474052 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:14:04.485753  474052 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:14:04.485772  474052 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:14:04.485795  474052 start.go:360] acquireMachinesLock for old-k8s-version-332573: {Name:mkf00cfa98960d68304c3826065c66fd6bccf2d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:14:04.485865  474052 start.go:364] duration metric: took 47.647µs to acquireMachinesLock for "old-k8s-version-332573"
	I1108 10:14:04.485894  474052 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:14:04.485902  474052 fix.go:54] fixHost starting: 
	I1108 10:14:04.486195  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:04.511093  474052 fix.go:112] recreateIfNeeded on old-k8s-version-332573: state=Stopped err=<nil>
	W1108 10:14:04.511121  474052 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:14:04.514299  474052 out.go:252] * Restarting existing docker container for "old-k8s-version-332573" ...
	I1108 10:14:04.514400  474052 cli_runner.go:164] Run: docker start old-k8s-version-332573
	I1108 10:14:04.775322  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:04.804831  474052 kic.go:430] container "old-k8s-version-332573" state is running.
	I1108 10:14:04.805253  474052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:14:04.831045  474052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/config.json ...
	I1108 10:14:04.831277  474052 machine.go:94] provisionDockerMachine start ...
	I1108 10:14:04.831363  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:04.855995  474052 main.go:143] libmachine: Using SSH client type: native
	I1108 10:14:04.856325  474052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1108 10:14:04.856341  474052 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:14:04.857216  474052 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:14:08.013175  474052 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332573
	
	I1108 10:14:08.013247  474052 ubuntu.go:182] provisioning hostname "old-k8s-version-332573"
	I1108 10:14:08.013354  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:08.031940  474052 main.go:143] libmachine: Using SSH client type: native
	I1108 10:14:08.032254  474052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1108 10:14:08.032269  474052 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-332573 && echo "old-k8s-version-332573" | sudo tee /etc/hostname
	I1108 10:14:08.193288  474052 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332573
	
	I1108 10:14:08.193369  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:08.214574  474052 main.go:143] libmachine: Using SSH client type: native
	I1108 10:14:08.214898  474052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1108 10:14:08.214920  474052 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-332573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-332573/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-332573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:14:08.365341  474052 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:14:08.365367  474052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:14:08.365408  474052 ubuntu.go:190] setting up certificates
	I1108 10:14:08.365420  474052 provision.go:84] configureAuth start
	I1108 10:14:08.365486  474052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:14:08.384359  474052 provision.go:143] copyHostCerts
	I1108 10:14:08.384437  474052 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:14:08.384456  474052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:14:08.384530  474052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:14:08.384631  474052 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:14:08.384646  474052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:14:08.384676  474052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:14:08.384731  474052 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:14:08.384740  474052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:14:08.384764  474052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:14:08.384818  474052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-332573 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-332573]
	I1108 10:14:09.602789  474052 provision.go:177] copyRemoteCerts
	I1108 10:14:09.602865  474052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:14:09.602906  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:09.621396  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:09.729700  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:14:09.750252  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1108 10:14:09.769568  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:14:09.789190  474052 provision.go:87] duration metric: took 1.423741166s to configureAuth
	I1108 10:14:09.789214  474052 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:14:09.789436  474052 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:14:09.789538  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:09.806518  474052 main.go:143] libmachine: Using SSH client type: native
	I1108 10:14:09.806824  474052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1108 10:14:09.806844  474052 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:14:10.137812  474052 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:14:10.137857  474052 machine.go:97] duration metric: took 5.306563877s to provisionDockerMachine
	I1108 10:14:10.137874  474052 start.go:293] postStartSetup for "old-k8s-version-332573" (driver="docker")
	I1108 10:14:10.137887  474052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:14:10.137959  474052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:14:10.138020  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:10.164542  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:10.277141  474052 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:14:10.280581  474052 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:14:10.280607  474052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:14:10.280619  474052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:14:10.280680  474052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:14:10.280766  474052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:14:10.280886  474052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:14:10.288472  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:14:10.306162  474052 start.go:296] duration metric: took 168.269557ms for postStartSetup
	I1108 10:14:10.306310  474052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:14:10.306378  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:10.323448  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:10.426468  474052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:14:10.431158  474052 fix.go:56] duration metric: took 5.945247585s for fixHost
	I1108 10:14:10.431183  474052 start.go:83] releasing machines lock for "old-k8s-version-332573", held for 5.94530544s
	I1108 10:14:10.431306  474052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-332573
	I1108 10:14:10.447825  474052 ssh_runner.go:195] Run: cat /version.json
	I1108 10:14:10.447883  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:10.448153  474052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:14:10.448213  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:10.468378  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:10.478465  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:10.690283  474052 ssh_runner.go:195] Run: systemctl --version
	I1108 10:14:10.698314  474052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:14:10.740389  474052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:14:10.745210  474052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:14:10.745279  474052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:14:10.753989  474052 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:14:10.754015  474052 start.go:496] detecting cgroup driver to use...
	I1108 10:14:10.754077  474052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:14:10.754142  474052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:14:10.769872  474052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:14:10.783177  474052 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:14:10.783254  474052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:14:10.799745  474052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:14:10.813275  474052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:14:10.935512  474052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:14:11.059541  474052 docker.go:234] disabling docker service ...
	I1108 10:14:11.059613  474052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:14:11.076538  474052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:14:11.091890  474052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:14:11.213572  474052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:14:11.341754  474052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:14:11.355789  474052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:14:11.371301  474052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 10:14:11.371366  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.382253  474052 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:14:11.382321  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.391161  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.399551  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.408883  474052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:14:11.417487  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.426942  474052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.435641  474052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:14:11.444399  474052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:14:11.452789  474052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:14:11.460631  474052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:14:11.578853  474052 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:14:11.713660  474052 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:14:11.713728  474052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:14:11.717910  474052 start.go:564] Will wait 60s for crictl version
	I1108 10:14:11.717979  474052 ssh_runner.go:195] Run: which crictl
	I1108 10:14:11.721837  474052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:14:11.751057  474052 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:14:11.751141  474052 ssh_runner.go:195] Run: crio --version
	I1108 10:14:11.780765  474052 ssh_runner.go:195] Run: crio --version
	I1108 10:14:11.811808  474052 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1108 10:14:11.814590  474052 cli_runner.go:164] Run: docker network inspect old-k8s-version-332573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:14:11.831244  474052 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:14:11.835330  474052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:14:11.845365  474052 kubeadm.go:884] updating cluster {Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:14:11.845481  474052 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:14:11.845540  474052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:14:11.880753  474052 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:14:11.880776  474052 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:14:11.880830  474052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:14:11.907119  474052 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:14:11.907145  474052 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:14:11.907158  474052 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1108 10:14:11.907262  474052 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-332573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:14:11.907351  474052 ssh_runner.go:195] Run: crio config
	I1108 10:14:11.965256  474052 cni.go:84] Creating CNI manager for ""
	I1108 10:14:11.965281  474052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:14:11.965303  474052 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:14:11.965328  474052 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-332573 NodeName:old-k8s-version-332573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:14:11.965476  474052 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-332573"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:14:11.965554  474052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1108 10:14:11.974486  474052 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:14:11.974562  474052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:14:11.982261  474052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1108 10:14:11.995216  474052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:14:12.017153  474052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1108 10:14:12.032568  474052 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:14:12.036216  474052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:14:12.046681  474052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:14:12.171928  474052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:14:12.188259  474052 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573 for IP: 192.168.85.2
	I1108 10:14:12.188282  474052 certs.go:195] generating shared ca certs ...
	I1108 10:14:12.188298  474052 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:14:12.188438  474052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:14:12.188488  474052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:14:12.188500  474052 certs.go:257] generating profile certs ...
	I1108 10:14:12.188585  474052 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.key
	I1108 10:14:12.188659  474052 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key.99f33f23
	I1108 10:14:12.188699  474052 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.key
	I1108 10:14:12.188825  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:14:12.188858  474052 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:14:12.188869  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:14:12.188891  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:14:12.188950  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:14:12.188978  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:14:12.189028  474052 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:14:12.189677  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:14:12.215480  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:14:12.236811  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:14:12.259035  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:14:12.282952  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1108 10:14:12.306516  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:14:12.331377  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:14:12.358653  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:14:12.377464  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:14:12.404500  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:14:12.427117  474052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:14:12.448977  474052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:14:12.464417  474052 ssh_runner.go:195] Run: openssl version
	I1108 10:14:12.472830  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:14:12.483101  474052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:14:12.492586  474052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:14:12.492659  474052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:14:12.547339  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:14:12.556711  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:14:12.566040  474052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:14:12.569859  474052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:14:12.569927  474052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:14:12.611983  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:14:12.620940  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:14:12.630317  474052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:14:12.634620  474052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:14:12.634745  474052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:14:12.676108  474052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:14:12.684352  474052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:14:12.688507  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:14:12.731343  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:14:12.772953  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:14:12.822239  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:14:12.875432  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:14:12.949967  474052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 10:14:13.016399  474052 kubeadm.go:401] StartCluster: {Name:old-k8s-version-332573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-332573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:14:13.016528  474052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:14:13.016614  474052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:14:13.082003  474052 cri.go:89] found id: "da1e13436901f6a2118e84439fab747e99dc786d2425d761e4ad1fad19016839"
	I1108 10:14:13.082080  474052 cri.go:89] found id: "c4401d75cbcf9e18223b7ce1c2681a4104ec2ca285d171c2fdc61f9eeaa9d089"
	I1108 10:14:13.082100  474052 cri.go:89] found id: "ddf965d723cdc3f9815a4ca0f4c33a9935ba39f91c2f7f5f2b12cf47d8b81e89"
	I1108 10:14:13.082123  474052 cri.go:89] found id: "f1403b9fcd37ed7fa8ce4d09687e1e5c99a91bea4d445e900f4d34951698c916"
	I1108 10:14:13.082144  474052 cri.go:89] found id: ""
	I1108 10:14:13.082219  474052 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:14:13.107910  474052 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:14:13Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:14:13.108041  474052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:14:13.117768  474052 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:14:13.117840  474052 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:14:13.117905  474052 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:14:13.132456  474052 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:14:13.133209  474052 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-332573" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:14:13.133532  474052 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-292236/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-332573" cluster setting kubeconfig missing "old-k8s-version-332573" context setting]
	I1108 10:14:13.134065  474052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:14:13.135512  474052 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:14:13.146223  474052 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:14:13.146299  474052 kubeadm.go:602] duration metric: took 28.438782ms to restartPrimaryControlPlane
	I1108 10:14:13.146323  474052 kubeadm.go:403] duration metric: took 129.934776ms to StartCluster
	I1108 10:14:13.146359  474052 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:14:13.146438  474052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:14:13.147404  474052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:14:13.147674  474052 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:14:13.148113  474052 config.go:182] Loaded profile config "old-k8s-version-332573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:14:13.148164  474052 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:14:13.148332  474052 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-332573"
	I1108 10:14:13.148359  474052 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-332573"
	W1108 10:14:13.148427  474052 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:14:13.148392  474052 addons.go:70] Setting dashboard=true in profile "old-k8s-version-332573"
	I1108 10:14:13.148485  474052 addons.go:239] Setting addon dashboard=true in "old-k8s-version-332573"
	W1108 10:14:13.148491  474052 addons.go:248] addon dashboard should already be in state true
	I1108 10:14:13.148507  474052 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:14:13.149290  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:13.149486  474052 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:14:13.149922  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:13.148401  474052 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-332573"
	I1108 10:14:13.150497  474052 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-332573"
	I1108 10:14:13.150754  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:13.152248  474052 out.go:179] * Verifying Kubernetes components...
	I1108 10:14:13.164326  474052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:14:13.209441  474052 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:14:13.213601  474052 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:14:13.213728  474052 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:14:13.218260  474052 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:14:13.218282  474052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:14:13.218347  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:13.218549  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:14:13.218563  474052 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:14:13.218605  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:13.237410  474052 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-332573"
	W1108 10:14:13.237431  474052 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:14:13.237456  474052 host.go:66] Checking if "old-k8s-version-332573" exists ...
	I1108 10:14:13.237891  474052 cli_runner.go:164] Run: docker container inspect old-k8s-version-332573 --format={{.State.Status}}
	I1108 10:14:13.290116  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:13.293923  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:13.311153  474052 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:14:13.311174  474052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:14:13.311244  474052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-332573
	I1108 10:14:13.344476  474052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/old-k8s-version-332573/id_rsa Username:docker}
	I1108 10:14:13.495341  474052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:14:13.530581  474052 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-332573" to be "Ready" ...
	I1108 10:14:13.560054  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:14:13.560122  474052 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:14:13.585310  474052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:14:13.589409  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:14:13.589482  474052 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:14:13.633880  474052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:14:13.640428  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:14:13.640456  474052 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:14:13.665509  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:14:13.665535  474052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:14:13.745090  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:14:13.745117  474052 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:14:13.804665  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:14:13.804699  474052 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:14:13.901942  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:14:13.901969  474052 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:14:13.959664  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:14:13.959690  474052 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:14:13.977119  474052 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:14:13.977148  474052 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:14:14.002207  474052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:14:17.847834  474052 node_ready.go:49] node "old-k8s-version-332573" is "Ready"
	I1108 10:14:17.847866  474052 node_ready.go:38] duration metric: took 4.317199138s for node "old-k8s-version-332573" to be "Ready" ...
	I1108 10:14:17.847880  474052 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:14:17.847938  474052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:14:19.367118  474052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.781724066s)
	I1108 10:14:19.367181  474052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.733279822s)
	I1108 10:14:20.058955  474052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.056698155s)
	I1108 10:14:20.059191  474052 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.211235585s)
	I1108 10:14:20.059226  474052 api_server.go:72] duration metric: took 6.911485086s to wait for apiserver process to appear ...
	I1108 10:14:20.059236  474052 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:14:20.059256  474052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:14:20.062102  474052 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-332573 addons enable metrics-server
	
	I1108 10:14:20.065097  474052 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1108 10:14:20.068035  474052 addons.go:515] duration metric: took 6.919857105s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1108 10:14:20.069002  474052 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:14:20.070466  474052 api_server.go:141] control plane version: v1.28.0
	I1108 10:14:20.070512  474052 api_server.go:131] duration metric: took 11.26885ms to wait for apiserver health ...
	I1108 10:14:20.070522  474052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:14:20.074729  474052 system_pods.go:59] 8 kube-system pods found
	I1108 10:14:20.074768  474052 system_pods.go:61] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:14:20.074781  474052 system_pods.go:61] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:14:20.074789  474052 system_pods.go:61] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:14:20.074797  474052 system_pods.go:61] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:14:20.074805  474052 system_pods.go:61] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:14:20.074814  474052 system_pods.go:61] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:14:20.074822  474052 system_pods.go:61] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:14:20.074830  474052 system_pods.go:61] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Running
	I1108 10:14:20.074836  474052 system_pods.go:74] duration metric: took 4.308623ms to wait for pod list to return data ...
	I1108 10:14:20.074845  474052 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:14:20.078373  474052 default_sa.go:45] found service account: "default"
	I1108 10:14:20.078402  474052 default_sa.go:55] duration metric: took 3.543022ms for default service account to be created ...
	I1108 10:14:20.078416  474052 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:14:20.082460  474052 system_pods.go:86] 8 kube-system pods found
	I1108 10:14:20.082494  474052 system_pods.go:89] "coredns-5dd5756b68-4s446" [c1b3815e-fae2-49ce-acba-3dcfc39bf058] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:14:20.082505  474052 system_pods.go:89] "etcd-old-k8s-version-332573" [b855be33-a819-4bd8-9e31-be26c9e843e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:14:20.082512  474052 system_pods.go:89] "kindnet-qg5t6" [2634489a-0805-4e5b-9e11-39bd98299cf9] Running
	I1108 10:14:20.082520  474052 system_pods.go:89] "kube-apiserver-old-k8s-version-332573" [b25c39ce-517c-4d33-873c-575fe2c80ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:14:20.082531  474052 system_pods.go:89] "kube-controller-manager-old-k8s-version-332573" [685d9867-beed-40dc-a7a5-3f857be0bb2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:14:20.082545  474052 system_pods.go:89] "kube-proxy-bn8tb" [9983ee1d-1280-460a-8b5e-183f0cd5fc26] Running
	I1108 10:14:20.082552  474052 system_pods.go:89] "kube-scheduler-old-k8s-version-332573" [28320e9b-dcc2-4890-8700-2872645808e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:14:20.082557  474052 system_pods.go:89] "storage-provisioner" [3942a7b8-f620-491e-8fdf-5ff17477030f] Running
	I1108 10:14:20.082574  474052 system_pods.go:126] duration metric: took 4.145037ms to wait for k8s-apps to be running ...
	I1108 10:14:20.082589  474052 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:14:20.082650  474052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:14:20.101654  474052 system_svc.go:56] duration metric: took 19.053869ms WaitForService to wait for kubelet
	I1108 10:14:20.101693  474052 kubeadm.go:587] duration metric: took 6.953959531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:14:20.101714  474052 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:14:20.106013  474052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:14:20.106052  474052 node_conditions.go:123] node cpu capacity is 2
	I1108 10:14:20.106065  474052 node_conditions.go:105] duration metric: took 4.343848ms to run NodePressure ...
	I1108 10:14:20.106078  474052 start.go:242] waiting for startup goroutines ...
	I1108 10:14:20.106085  474052 start.go:247] waiting for cluster config update ...
	I1108 10:14:20.106096  474052 start.go:256] writing updated cluster config ...
	I1108 10:14:20.106401  474052 ssh_runner.go:195] Run: rm -f paused
	I1108 10:14:20.110584  474052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:14:20.115192  474052 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-4s446" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:14:22.120444  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:24.121246  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:26.621769  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:29.121676  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:31.621294  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:34.121979  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:36.126200  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:38.621478  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:40.623175  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:42.626669  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:45.126003  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:47.620570  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:49.625381  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:52.121354  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	W1108 10:14:54.621216  474052 pod_ready.go:104] pod "coredns-5dd5756b68-4s446" is not "Ready", error: <nil>
	I1108 10:14:56.621538  474052 pod_ready.go:94] pod "coredns-5dd5756b68-4s446" is "Ready"
	I1108 10:14:56.621571  474052 pod_ready.go:86] duration metric: took 36.50634954s for pod "coredns-5dd5756b68-4s446" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.624797  474052 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.630152  474052 pod_ready.go:94] pod "etcd-old-k8s-version-332573" is "Ready"
	I1108 10:14:56.630184  474052 pod_ready.go:86] duration metric: took 5.353219ms for pod "etcd-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.633507  474052 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.638486  474052 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-332573" is "Ready"
	I1108 10:14:56.638511  474052 pod_ready.go:86] duration metric: took 4.975181ms for pod "kube-apiserver-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.641670  474052 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:56.819001  474052 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-332573" is "Ready"
	I1108 10:14:56.819034  474052 pod_ready.go:86] duration metric: took 177.335734ms for pod "kube-controller-manager-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:57.019968  474052 pod_ready.go:83] waiting for pod "kube-proxy-bn8tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:57.418725  474052 pod_ready.go:94] pod "kube-proxy-bn8tb" is "Ready"
	I1108 10:14:57.418752  474052 pod_ready.go:86] duration metric: took 398.753669ms for pod "kube-proxy-bn8tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:57.619342  474052 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:58.018462  474052 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-332573" is "Ready"
	I1108 10:14:58.018493  474052 pod_ready.go:86] duration metric: took 399.12219ms for pod "kube-scheduler-old-k8s-version-332573" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:14:58.018506  474052 pod_ready.go:40] duration metric: took 37.907888318s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:14:58.072791  474052 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1108 10:14:58.075973  474052 out.go:203] 
	W1108 10:14:58.079023  474052 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 10:14:58.081982  474052 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 10:14:58.085305  474052 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-332573" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.208410196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.215987036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.216528849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.237250955Z" level=info msg="Created container 6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf/dashboard-metrics-scraper" id=7ad89db2-d1e4-492a-bebf-5f40f3370c0a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.238515628Z" level=info msg="Starting container: 6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861" id=650d44a4-d67a-4e5c-9d59-694801e67d16 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.240655573Z" level=info msg="Started container" PID=1641 containerID=6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf/dashboard-metrics-scraper id=650d44a4-d67a-4e5c-9d59-694801e67d16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67be495f1793912d7a9937472751642b3a33e2a79a17062c5c68a4df6dc195a2
	Nov 08 10:14:51 old-k8s-version-332573 conmon[1639]: conmon 6494beae2f608531b20d <ninfo>: container 1641 exited with status 1
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.593148085Z" level=info msg="Removing container: 42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1" id=55ffa854-0aaf-4dd0-9c62-a86f6853b42e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.603433522Z" level=info msg="Error loading conmon cgroup of container 42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1: cgroup deleted" id=55ffa854-0aaf-4dd0-9c62-a86f6853b42e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:14:51 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:51.60802905Z" level=info msg="Removed container 42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf/dashboard-metrics-scraper" id=55ffa854-0aaf-4dd0-9c62-a86f6853b42e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.23303839Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.237917858Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.237955069Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.237977691Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.241242091Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.241279548Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.241303778Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.244744188Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.244783605Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.244814096Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.247887135Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.247920604Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.247942257Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.251352562Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:14:59 old-k8s-version-332573 crio[652]: time="2025-11-08T10:14:59.251391611Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	6494beae2f608       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   67be495f17939       dashboard-metrics-scraper-5f989dc9cf-pjspf       kubernetes-dashboard
	e07a28f291b2f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   d7ae8867c7c04       storage-provisioner                              kube-system
	f43848c6090cd       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   0f53852feb7d6       kubernetes-dashboard-8694d4445c-xppkg            kubernetes-dashboard
	7678cbdbaf440       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   ce9a18b52f96f       busybox                                          default
	f35906ed98b83       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   c6a549ef5c079       coredns-5dd5756b68-4s446                         kube-system
	1a824beedc294       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   d7ae8867c7c04       storage-provisioner                              kube-system
	3005567625f32       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   641023971e7fe       kindnet-qg5t6                                    kube-system
	6faa522a3460f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           57 seconds ago       Running             kube-proxy                  1                   decf33fd09a54       kube-proxy-bn8tb                                 kube-system
	da1e13436901f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   5c9047851c1a4       kube-scheduler-old-k8s-version-332573            kube-system
	c4401d75cbcf9       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   9f5fbdb6abd85       kube-controller-manager-old-k8s-version-332573   kube-system
	ddf965d723cdc       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   7f4adb7008601       kube-apiserver-old-k8s-version-332573            kube-system
	f1403b9fcd37e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   4ab28b7e9f2d5       etcd-old-k8s-version-332573                      kube-system
	
	
	==> coredns [f35906ed98b83b5dffa8616e43242968b9b5736fdb970a04ad8e70d083d54e91] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55534 - 46108 "HINFO IN 6170447637721567867.9205345165333730542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014643715s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-332573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-332573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=old-k8s-version-332573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_13_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:13:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-332573
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:15:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:14:48 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:14:48 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:14:48 +0000   Sat, 08 Nov 2025 10:13:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:14:48 +0000   Sat, 08 Nov 2025 10:13:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-332573
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d2774f32-76bc-4924-aa00-9e91907fb5f7
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-4s446                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-old-k8s-version-332573                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-qg5t6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-332573             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-332573    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-bn8tb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-332573             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-pjspf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xppkg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x8 over 2m14s)  kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-332573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s                   node-controller  Node old-k8s-version-332573 event: Registered Node old-k8s-version-332573 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-332573 status is now: NodeReady
	  Normal  Starting                 64s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node old-k8s-version-332573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node old-k8s-version-332573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-332573 event: Registered Node old-k8s-version-332573 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:45] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:50] overlayfs: idmapped layers are currently not supported
	[ +37.319908] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:51] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f1403b9fcd37ed7fa8ce4d09687e1e5c99a91bea4d445e900f4d34951698c916] <==
	{"level":"info","ts":"2025-11-08T10:14:13.447036Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:14:13.447056Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:14:13.447297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-08T10:14:13.447371Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-08T10:14:13.447459Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:14:13.447484Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:14:13.478902Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T10:14:13.479086Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T10:14:13.479106Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T10:14:13.479194Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:14:13.479203Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:14:14.918142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T10:14:14.918191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T10:14:14.918224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-08T10:14:14.918238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T10:14:14.918245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-08T10:14:14.918254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-08T10:14:14.918262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-08T10:14:14.923254Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-332573 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T10:14:14.923302Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:14:14.927015Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-08T10:14:14.92332Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:14:14.932975Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T10:14:14.933014Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T10:14:14.964323Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:15:16 up  2:57,  0 user,  load average: 2.29, 2.64, 2.26
	Linux old-k8s-version-332573 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3005567625f32fc0c3b56e4ac4331d3fa613587bca9f198558bc9da766621077] <==
	I1108 10:14:19.032834       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:14:19.041224       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:14:19.041444       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:14:19.041459       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:14:19.041473       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:14:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:14:19.230830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:14:19.230849       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:14:19.230857       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:14:19.231156       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:14:49.232599       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:14:49.232601       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:14:49.232771       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:14:49.232836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:14:50.230948       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:14:50.230995       1 metrics.go:72] Registering metrics
	I1108 10:14:50.231049       1 controller.go:711] "Syncing nftables rules"
	I1108 10:14:59.231996       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:14:59.232699       1 main.go:301] handling current node
	I1108 10:15:09.237283       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:15:09.237392       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ddf965d723cdc3f9815a4ca0f4c33a9935ba39f91c2f7f5f2b12cf47d8b81e89] <==
	I1108 10:14:17.821930       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 10:14:17.828291       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 10:14:17.830542       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:14:17.830677       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 10:14:17.831335       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:14:17.835040       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 10:14:17.835131       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 10:14:17.837962       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 10:14:17.838739       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 10:14:17.839601       1 aggregator.go:166] initial CRD sync complete...
	I1108 10:14:17.839621       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 10:14:17.839627       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:14:17.839633       1 cache.go:39] Caches are synced for autoregister controller
	E1108 10:14:17.895309       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:14:18.537324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:14:19.856270       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 10:14:19.900880       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 10:14:19.926691       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:14:19.938716       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:14:19.948449       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 10:14:20.023149       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.133.233"}
	I1108 10:14:20.051264       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.165.36"}
	I1108 10:14:30.223384       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 10:14:30.280526       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 10:14:30.672864       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c4401d75cbcf9e18223b7ce1c2681a4104ec2ca285d171c2fdc61f9eeaa9d089] <==
	I1108 10:14:30.323756       1 shared_informer.go:318] Caches are synced for persistent volume
	I1108 10:14:30.581727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="391.271218ms"
	I1108 10:14:30.581884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.485µs"
	I1108 10:14:30.586872       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-pjspf"
	I1108 10:14:30.586900       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xppkg"
	I1108 10:14:30.602383       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="369.258221ms"
	I1108 10:14:30.611372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="382.224306ms"
	I1108 10:14:30.637319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="25.877605ms"
	I1108 10:14:30.637713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.658µs"
	I1108 10:14:30.644315       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.876252ms"
	I1108 10:14:30.646412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.172µs"
	I1108 10:14:30.656594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.533µs"
	I1108 10:14:30.676324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.627µs"
	I1108 10:14:30.742149       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:14:30.769424       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:14:30.769458       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 10:14:36.573607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.13924ms"
	I1108 10:14:36.575498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="143.32µs"
	I1108 10:14:40.574252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.69µs"
	I1108 10:14:41.580100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.315µs"
	I1108 10:14:42.581749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.053µs"
	I1108 10:14:51.609268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.237µs"
	I1108 10:14:56.161077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.740406ms"
	I1108 10:14:56.161183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.262µs"
	I1108 10:15:01.221009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="121.863µs"
	
	
	==> kube-proxy [6faa522a3460f1fe9a0b871ab93dca9008282501f9c393d4f78de19936b855b1] <==
	I1108 10:14:19.207747       1 server_others.go:69] "Using iptables proxy"
	I1108 10:14:19.233371       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1108 10:14:19.259223       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:14:19.261360       1 server_others.go:152] "Using iptables Proxier"
	I1108 10:14:19.261453       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 10:14:19.261580       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 10:14:19.261637       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 10:14:19.262043       1 server.go:846] "Version info" version="v1.28.0"
	I1108 10:14:19.262290       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:14:19.263009       1 config.go:188] "Starting service config controller"
	I1108 10:14:19.263075       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 10:14:19.263117       1 config.go:97] "Starting endpoint slice config controller"
	I1108 10:14:19.263142       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 10:14:19.265273       1 config.go:315] "Starting node config controller"
	I1108 10:14:19.266249       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 10:14:19.364046       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 10:14:19.364096       1 shared_informer.go:318] Caches are synced for service config
	I1108 10:14:19.366877       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [da1e13436901f6a2118e84439fab747e99dc786d2425d761e4ad1fad19016839] <==
	I1108 10:14:16.264582       1 serving.go:348] Generated self-signed cert in-memory
	W1108 10:14:17.802080       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 10:14:17.802111       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:14:17.802122       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 10:14:17.802129       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 10:14:17.861109       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1108 10:14:17.861144       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:14:17.864779       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1108 10:14:17.864890       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:14:17.864905       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 10:14:17.864940       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 10:14:17.969095       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.608302     776 topology_manager.go:215] "Topology Admit Handler" podUID="daa8854a-6b69-46b9-8b93-303b0882bea4" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-xppkg"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.791757     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e12916d3-9c5b-4931-b373-89d06b906ff5-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-pjspf\" (UID: \"e12916d3-9c5b-4931-b373-89d06b906ff5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.791822     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxgm\" (UniqueName: \"kubernetes.io/projected/e12916d3-9c5b-4931-b373-89d06b906ff5-kube-api-access-8bxgm\") pod \"dashboard-metrics-scraper-5f989dc9cf-pjspf\" (UID: \"e12916d3-9c5b-4931-b373-89d06b906ff5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.791851     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxfh\" (UniqueName: \"kubernetes.io/projected/daa8854a-6b69-46b9-8b93-303b0882bea4-kube-api-access-xnxfh\") pod \"kubernetes-dashboard-8694d4445c-xppkg\" (UID: \"daa8854a-6b69-46b9-8b93-303b0882bea4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xppkg"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: I1108 10:14:30.791880     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/daa8854a-6b69-46b9-8b93-303b0882bea4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xppkg\" (UID: \"daa8854a-6b69-46b9-8b93-303b0882bea4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xppkg"
	Nov 08 10:14:30 old-k8s-version-332573 kubelet[776]: W1108 10:14:30.938426     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/crio-0f53852feb7d6de1e124b72688f809d78ee19852aad92021b46529b67e940ecc WatchSource:0}: Error finding container 0f53852feb7d6de1e124b72688f809d78ee19852aad92021b46529b67e940ecc: Status 404 returned error can't find the container with id 0f53852feb7d6de1e124b72688f809d78ee19852aad92021b46529b67e940ecc
	Nov 08 10:14:31 old-k8s-version-332573 kubelet[776]: W1108 10:14:31.225054     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9c2d89f29f92ebfbaeb7d85255a3f13969788c28702c9772e67edcedeacc7e35/crio-67be495f1793912d7a9937472751642b3a33e2a79a17062c5c68a4df6dc195a2 WatchSource:0}: Error finding container 67be495f1793912d7a9937472751642b3a33e2a79a17062c5c68a4df6dc195a2: Status 404 returned error can't find the container with id 67be495f1793912d7a9937472751642b3a33e2a79a17062c5c68a4df6dc195a2
	Nov 08 10:14:36 old-k8s-version-332573 kubelet[776]: I1108 10:14:36.555937     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xppkg" podStartSLOduration=1.809151145 podCreationTimestamp="2025-11-08 10:14:30 +0000 UTC" firstStartedPulling="2025-11-08 10:14:30.941213049 +0000 UTC m=+18.748569452" lastFinishedPulling="2025-11-08 10:14:35.687253218 +0000 UTC m=+23.494609629" observedRunningTime="2025-11-08 10:14:36.554147574 +0000 UTC m=+24.361503976" watchObservedRunningTime="2025-11-08 10:14:36.555191322 +0000 UTC m=+24.362547733"
	Nov 08 10:14:40 old-k8s-version-332573 kubelet[776]: I1108 10:14:40.553125     776 scope.go:117] "RemoveContainer" containerID="007c2059e190983c40d17103d7876fee02e67f7461631cb93a493bfd7a392825"
	Nov 08 10:14:41 old-k8s-version-332573 kubelet[776]: I1108 10:14:41.557299     776 scope.go:117] "RemoveContainer" containerID="42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1"
	Nov 08 10:14:41 old-k8s-version-332573 kubelet[776]: E1108 10:14:41.559018     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pjspf_kubernetes-dashboard(e12916d3-9c5b-4931-b373-89d06b906ff5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf" podUID="e12916d3-9c5b-4931-b373-89d06b906ff5"
	Nov 08 10:14:41 old-k8s-version-332573 kubelet[776]: I1108 10:14:41.559638     776 scope.go:117] "RemoveContainer" containerID="007c2059e190983c40d17103d7876fee02e67f7461631cb93a493bfd7a392825"
	Nov 08 10:14:42 old-k8s-version-332573 kubelet[776]: I1108 10:14:42.560758     776 scope.go:117] "RemoveContainer" containerID="42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1"
	Nov 08 10:14:42 old-k8s-version-332573 kubelet[776]: E1108 10:14:42.561085     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pjspf_kubernetes-dashboard(e12916d3-9c5b-4931-b373-89d06b906ff5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf" podUID="e12916d3-9c5b-4931-b373-89d06b906ff5"
	Nov 08 10:14:49 old-k8s-version-332573 kubelet[776]: I1108 10:14:49.578236     776 scope.go:117] "RemoveContainer" containerID="1a824beedc294c0a61db23a182cf893538af18997f4c56b81e23ccb1987066e7"
	Nov 08 10:14:51 old-k8s-version-332573 kubelet[776]: I1108 10:14:51.205115     776 scope.go:117] "RemoveContainer" containerID="42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1"
	Nov 08 10:14:51 old-k8s-version-332573 kubelet[776]: I1108 10:14:51.587045     776 scope.go:117] "RemoveContainer" containerID="42cd5f70376e8725c5f1eea402207f902a14c983de72b814a6e93bd7c8b8cbc1"
	Nov 08 10:14:51 old-k8s-version-332573 kubelet[776]: I1108 10:14:51.587280     776 scope.go:117] "RemoveContainer" containerID="6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861"
	Nov 08 10:14:51 old-k8s-version-332573 kubelet[776]: E1108 10:14:51.587610     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pjspf_kubernetes-dashboard(e12916d3-9c5b-4931-b373-89d06b906ff5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf" podUID="e12916d3-9c5b-4931-b373-89d06b906ff5"
	Nov 08 10:15:01 old-k8s-version-332573 kubelet[776]: I1108 10:15:01.205126     776 scope.go:117] "RemoveContainer" containerID="6494beae2f608531b20d93e37d8b063bacf6b249af06f57b7d1b02a5e6b6e861"
	Nov 08 10:15:01 old-k8s-version-332573 kubelet[776]: E1108 10:15:01.205456     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pjspf_kubernetes-dashboard(e12916d3-9c5b-4931-b373-89d06b906ff5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pjspf" podUID="e12916d3-9c5b-4931-b373-89d06b906ff5"
	Nov 08 10:15:10 old-k8s-version-332573 kubelet[776]: I1108 10:15:10.441227     776 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 10:15:10 old-k8s-version-332573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:15:10 old-k8s-version-332573 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:15:10 old-k8s-version-332573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f43848c6090cdad59f16b76bc22d2edf9cdbf2bc870489f430dd04346773cd3c] <==
	2025/11/08 10:14:35 Starting overwatch
	2025/11/08 10:14:35 Using namespace: kubernetes-dashboard
	2025/11/08 10:14:35 Using in-cluster config to connect to apiserver
	2025/11/08 10:14:35 Using secret token for csrf signing
	2025/11/08 10:14:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:14:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:14:35 Successful initial request to the apiserver, version: v1.28.0
	2025/11/08 10:14:35 Generating JWE encryption key
	2025/11/08 10:14:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:14:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:14:36 Initializing JWE encryption key from synchronized object
	2025/11/08 10:14:36 Creating in-cluster Sidecar client
	2025/11/08 10:14:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:14:36 Serving insecurely on HTTP port: 9090
	2025/11/08 10:15:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1a824beedc294c0a61db23a182cf893538af18997f4c56b81e23ccb1987066e7] <==
	I1108 10:14:19.170120       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:14:49.172674       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e07a28f291b2fb58d4ce48d5496cd7dba9831b2944b34f9927c168afd4522bd7] <==
	I1108 10:14:49.631317       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:14:49.644062       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:14:49.644105       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 10:15:07.050335       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:15:07.050512       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332573_fbdd49ec-c7da-4892-86fc-e11dfb21024d!
	I1108 10:15:07.050915       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e016727-b435-4896-8e63-48348502e137", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-332573_fbdd49ec-c7da-4892-86fc-e11dfb21024d became leader
	I1108 10:15:07.151380       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332573_fbdd49ec-c7da-4892-86fc-e11dfb21024d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-332573 -n old-k8s-version-332573
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-332573 -n old-k8s-version-332573: exit status 2 (354.767821ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-332573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.05s)
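
The pause failure above can be re-checked outside the harness with the same commands the post-mortem ran. A minimal manual sequence, assuming the old-k8s-version-332573 profile from this run still exists (the audit log later shows it being deleted):

	# Hedged reproduction sketch; commands are taken verbatim from this report
	out/minikube-linux-arm64 pause -p old-k8s-version-332573 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-332573 -n old-k8s-version-332573
	kubectl --context old-k8s-version-332573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running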

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (293.312679ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:16:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-872727 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-872727 describe deploy/metrics-server -n kube-system: exit status 1 (102.592387ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-872727 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
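
The immediate cause surfaced in the stderr block above is minikube's paused-state check: before enabling the addon it runs `sudo runc list -f json` on the node, and that command fails because `/run/runc` does not exist there. A minimal sketch for inspecting that state by hand, assuming the no-preload-872727 profile from this run and the crictl binary available in the kicbase image:

	# Hedged debugging sketch; profile name and tooling are assumptions from this run
	out/minikube-linux-arm64 -p no-preload-872727 ssh -- "ls -ld /run/runc"        # path the failed check expects
	out/minikube-linux-arm64 -p no-preload-872727 ssh -- "sudo runc list -f json"  # the exact command the addon check runs
	out/minikube-linux-arm64 -p no-preload-872727 ssh -- "sudo crictl ps -a"       # crio's own view of container state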
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-872727
helpers_test.go:243: (dbg) docker inspect no-preload-872727:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662",
	        "Created": "2025-11-08T10:15:21.269248431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477962,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:15:21.383168869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/hosts",
	        "LogPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662-json.log",
	        "Name": "/no-preload-872727",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-872727:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-872727",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662",
	                "LowerDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-872727",
	                "Source": "/var/lib/docker/volumes/no-preload-872727/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-872727",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-872727",
	                "name.minikube.sigs.k8s.io": "no-preload-872727",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7343d3c6e08abbf861b887f1f35673e62a13da8e5a5c53aa40c31ab577636682",
	            "SandboxKey": "/var/run/docker/netns/7343d3c6e08a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-872727": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:6c:aa:45:a9:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d3d5cc4896cbfd283044d2bbac6b28bc7f91508576b47e0339f3f688dde7413",
	                    "EndpointID": "78bef7f26e2f5eab2b4571fa6cbcec1bf529aa7fc04f5c27d7f369b26e76f27a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-872727",
	                        "a3d97acc3509"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-872727 -n no-preload-872727
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-872727 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-872727 logs -n 25: (1.203142678s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-099098 sudo crio config                                                                                                                                                                                                             │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:10 UTC │                     │
	│ delete  │ -p cilium-099098                                                                                                                                                                                                                              │ cilium-099098            │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p force-systemd-env-000082 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ pause   │ -p pause-585281 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │                     │
	│ delete  │ -p pause-585281                                                                                                                                                                                                                               │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ delete  │ -p force-systemd-env-000082                                                                                                                                                                                                                   │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p cert-options-916440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ cert-options-916440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ -p cert-options-916440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ delete  │ -p cert-options-916440                                                                                                                                                                                                                        │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │                     │
	│ stop    │ -p old-k8s-version-332573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │ 08 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-332573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ image   │ old-k8s-version-332573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-332573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727        │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p cert-expiration-328489                                                                                                                                                                                                                     │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-872727        │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:15:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:15:52.618413  481559 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:15:52.618620  481559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:15:52.618633  481559 out.go:374] Setting ErrFile to fd 2...
	I1108 10:15:52.618639  481559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:15:52.618980  481559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:15:52.619434  481559 out.go:368] Setting JSON to false
	I1108 10:15:52.620380  481559 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10702,"bootTime":1762586251,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:15:52.620448  481559 start.go:143] virtualization:  
	I1108 10:15:52.624344  481559 out.go:179] * [embed-certs-606645] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:15:52.628992  481559 notify.go:221] Checking for updates...
	I1108 10:15:52.632518  481559 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:15:52.636254  481559 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:15:52.639585  481559 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:15:52.642889  481559 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:15:52.646071  481559 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:15:52.649267  481559 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:15:52.653099  481559 config.go:182] Loaded profile config "no-preload-872727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:15:52.653244  481559 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:15:52.694085  481559 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:15:52.694316  481559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:15:52.794288  481559 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-08 10:15:52.782868942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:15:52.794394  481559 docker.go:319] overlay module found
	I1108 10:15:52.797662  481559 out.go:179] * Using the docker driver based on user configuration
	I1108 10:15:52.800686  481559 start.go:309] selected driver: docker
	I1108 10:15:52.800701  481559 start.go:930] validating driver "docker" against <nil>
	I1108 10:15:52.800715  481559 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:15:52.801522  481559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:15:52.892025  481559 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-08 10:15:52.881640354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:15:52.892196  481559 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:15:52.892422  481559 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:15:52.895312  481559 out.go:179] * Using Docker driver with root privileges
	I1108 10:15:52.898248  481559 cni.go:84] Creating CNI manager for ""
	I1108 10:15:52.898327  481559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:15:52.898336  481559 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:15:52.898415  481559 start.go:353] cluster config:
	{Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:15:52.901608  481559 out.go:179] * Starting "embed-certs-606645" primary control-plane node in "embed-certs-606645" cluster
	I1108 10:15:52.904392  481559 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:15:52.907456  481559 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:15:52.910362  481559 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:15:52.910435  481559 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:15:52.910446  481559 cache.go:59] Caching tarball of preloaded images
	I1108 10:15:52.910530  481559 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:15:52.910540  481559 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:15:52.910643  481559 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/config.json ...
	I1108 10:15:52.910673  481559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/config.json: {Name:mk748ad8601d6726015d828d3e8994a581c7a7e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:15:52.910825  481559 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:15:52.929549  481559 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:15:52.929574  481559 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:15:52.929586  481559 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:15:52.929617  481559 start.go:360] acquireMachinesLock for embed-certs-606645: {Name:mke419d0c52d844252caf31cfbe575cf42b647de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:15:52.929722  481559 start.go:364] duration metric: took 88.789µs to acquireMachinesLock for "embed-certs-606645"
	I1108 10:15:52.929746  481559 start.go:93] Provisioning new machine with config: &{Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:15:52.929812  481559 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:15:50.929417  477665 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:15:51.949295  477665 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:15:52.383310  477665 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:15:52.383833  477665 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-872727] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:15:52.951984  477665 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:15:52.952126  477665 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-872727] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:15:54.687426  477665 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:15:52.933273  481559 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:15:52.933503  481559 start.go:159] libmachine.API.Create for "embed-certs-606645" (driver="docker")
	I1108 10:15:52.933547  481559 client.go:173] LocalClient.Create starting
	I1108 10:15:52.933607  481559 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:15:52.933636  481559 main.go:143] libmachine: Decoding PEM data...
	I1108 10:15:52.933654  481559 main.go:143] libmachine: Parsing certificate...
	I1108 10:15:52.933709  481559 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:15:52.933725  481559 main.go:143] libmachine: Decoding PEM data...
	I1108 10:15:52.933735  481559 main.go:143] libmachine: Parsing certificate...
	I1108 10:15:52.934106  481559 cli_runner.go:164] Run: docker network inspect embed-certs-606645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:15:52.954084  481559 cli_runner.go:211] docker network inspect embed-certs-606645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:15:52.954175  481559 network_create.go:284] running [docker network inspect embed-certs-606645] to gather additional debugging logs...
	I1108 10:15:52.954193  481559 cli_runner.go:164] Run: docker network inspect embed-certs-606645
	W1108 10:15:52.971970  481559 cli_runner.go:211] docker network inspect embed-certs-606645 returned with exit code 1
	I1108 10:15:52.972005  481559 network_create.go:287] error running [docker network inspect embed-certs-606645]: docker network inspect embed-certs-606645: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-606645 not found
	I1108 10:15:52.972018  481559 network_create.go:289] output of [docker network inspect embed-certs-606645]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-606645 not found
	
	** /stderr **
	I1108 10:15:52.972123  481559 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:15:52.990295  481559 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:15:52.990694  481559 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:15:52.990921  481559 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:15:52.991326  481559 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b9670}
	I1108 10:15:52.991343  481559 network_create.go:124] attempt to create docker network embed-certs-606645 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:15:52.991398  481559 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-606645 embed-certs-606645
	I1108 10:15:53.087389  481559 network_create.go:108] docker network embed-certs-606645 192.168.76.0/24 created
	I1108 10:15:53.087431  481559 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-606645" container
	I1108 10:15:53.087503  481559 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:15:53.113418  481559 cli_runner.go:164] Run: docker volume create embed-certs-606645 --label name.minikube.sigs.k8s.io=embed-certs-606645 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:15:53.146558  481559 oci.go:103] Successfully created a docker volume embed-certs-606645
	I1108 10:15:53.146678  481559 cli_runner.go:164] Run: docker run --rm --name embed-certs-606645-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-606645 --entrypoint /usr/bin/test -v embed-certs-606645:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:15:53.747917  481559 oci.go:107] Successfully prepared a docker volume embed-certs-606645
	I1108 10:15:53.747983  481559 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:15:53.748002  481559 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:15:53.748067  481559 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-606645:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 10:15:55.494644  477665 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:15:55.823288  477665 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:15:55.823863  477665 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:15:56.543237  477665 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:15:56.968198  477665 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:15:58.675287  477665 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:15:59.359961  477665 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:15:59.724319  477665 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:15:59.724421  477665 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:15:59.740921  477665 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:15:59.746395  477665 out.go:252]   - Booting up control plane ...
	I1108 10:15:59.746509  477665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:15:59.746592  477665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:15:59.746662  477665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:15:59.771769  477665 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:15:59.771882  477665 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:15:59.783715  477665 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:15:59.784101  477665 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:15:59.784151  477665 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:15:59.949895  477665 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:15:59.950021  477665 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:15:58.409140  481559 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-606645:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.661029729s)
	I1108 10:15:58.409174  481559 kic.go:203] duration metric: took 4.661168052s to extract preloaded images to volume ...
	W1108 10:15:58.409313  481559 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:15:58.409427  481559 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:15:58.506681  481559 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-606645 --name embed-certs-606645 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-606645 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-606645 --network embed-certs-606645 --ip 192.168.76.2 --volume embed-certs-606645:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:15:58.841565  481559 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Running}}
	I1108 10:15:58.863640  481559 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:15:58.897048  481559 cli_runner.go:164] Run: docker exec embed-certs-606645 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:15:58.967580  481559 oci.go:144] the created container "embed-certs-606645" has a running status.
	I1108 10:15:58.967607  481559 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa...
	I1108 10:15:59.142927  481559 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:15:59.180269  481559 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:15:59.207024  481559 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:15:59.207043  481559 kic_runner.go:114] Args: [docker exec --privileged embed-certs-606645 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:15:59.274486  481559 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:15:59.305212  481559 machine.go:94] provisionDockerMachine start ...
	I1108 10:15:59.305317  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:15:59.330207  481559 main.go:143] libmachine: Using SSH client type: native
	I1108 10:15:59.330546  481559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1108 10:15:59.330556  481559 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:15:59.332792  481559 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:16:02.480463  481559 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-606645
	
	I1108 10:16:02.480546  481559 ubuntu.go:182] provisioning hostname "embed-certs-606645"
	I1108 10:16:02.480639  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:02.497270  481559 main.go:143] libmachine: Using SSH client type: native
	I1108 10:16:02.497588  481559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1108 10:16:02.497604  481559 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-606645 && echo "embed-certs-606645" | sudo tee /etc/hostname
	I1108 10:16:02.451122  477665 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.501578382s
	I1108 10:16:02.454815  477665 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:16:02.454920  477665 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1108 10:16:02.455051  477665 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:16:02.455139  477665 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 10:16:02.678785  481559 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-606645
	
	I1108 10:16:02.678892  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:02.704209  481559 main.go:143] libmachine: Using SSH client type: native
	I1108 10:16:02.704521  481559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1108 10:16:02.704541  481559 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606645/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:16:02.881549  481559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:16:02.881619  481559 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:16:02.881654  481559 ubuntu.go:190] setting up certificates
	I1108 10:16:02.881708  481559 provision.go:84] configureAuth start
	I1108 10:16:02.881824  481559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:16:02.906408  481559 provision.go:143] copyHostCerts
	I1108 10:16:02.906474  481559 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:16:02.906484  481559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:16:02.906563  481559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:16:02.906659  481559 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:16:02.906664  481559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:16:02.906689  481559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:16:02.906750  481559 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:16:02.906754  481559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:16:02.906777  481559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:16:02.906829  481559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606645 san=[127.0.0.1 192.168.76.2 embed-certs-606645 localhost minikube]
	I1108 10:16:03.336400  481559 provision.go:177] copyRemoteCerts
	I1108 10:16:03.336469  481559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:16:03.336526  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:03.370419  481559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:16:03.498094  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:16:03.531748  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:16:03.560293  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1108 10:16:03.596359  481559 provision.go:87] duration metric: took 714.608794ms to configureAuth
	I1108 10:16:03.596388  481559 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:16:03.596576  481559 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:16:03.596682  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:03.621124  481559 main.go:143] libmachine: Using SSH client type: native
	I1108 10:16:03.621438  481559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1108 10:16:03.621453  481559 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:16:04.012563  481559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:16:04.012588  481559 machine.go:97] duration metric: took 4.707356066s to provisionDockerMachine
	I1108 10:16:04.012598  481559 client.go:176] duration metric: took 11.079044321s to LocalClient.Create
	I1108 10:16:04.012609  481559 start.go:167] duration metric: took 11.079107074s to libmachine.API.Create "embed-certs-606645"
	I1108 10:16:04.012617  481559 start.go:293] postStartSetup for "embed-certs-606645" (driver="docker")
	I1108 10:16:04.012626  481559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:16:04.012690  481559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:16:04.012730  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:04.050915  481559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:16:04.186067  481559 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:16:04.189401  481559 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:16:04.189426  481559 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:16:04.189437  481559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:16:04.189490  481559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:16:04.189567  481559 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:16:04.189665  481559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:16:04.203303  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:16:04.225526  481559 start.go:296] duration metric: took 212.886206ms for postStartSetup
	I1108 10:16:04.226018  481559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:16:04.254787  481559 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/config.json ...
	I1108 10:16:04.255075  481559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:16:04.255118  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:04.281487  481559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:16:04.386583  481559 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:16:04.395337  481559 start.go:128] duration metric: took 11.465509776s to createHost
	I1108 10:16:04.395359  481559 start.go:83] releasing machines lock for "embed-certs-606645", held for 11.465628776s
	I1108 10:16:04.395440  481559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:16:04.431418  481559 ssh_runner.go:195] Run: cat /version.json
	I1108 10:16:04.431471  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:04.431703  481559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:16:04.431766  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:04.469558  481559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:16:04.473293  481559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:16:04.597251  481559 ssh_runner.go:195] Run: systemctl --version
	I1108 10:16:04.721112  481559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:16:04.814668  481559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:16:04.825594  481559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:16:04.825707  481559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:16:04.865363  481559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:16:04.865434  481559 start.go:496] detecting cgroup driver to use...
	I1108 10:16:04.865480  481559 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:16:04.865559  481559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:16:04.902123  481559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:16:04.922574  481559 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:16:04.922689  481559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:16:04.942964  481559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:16:04.968408  481559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:16:05.168269  481559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:16:05.397841  481559 docker.go:234] disabling docker service ...
	I1108 10:16:05.397913  481559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:16:05.433084  481559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:16:05.462623  481559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:16:05.674809  481559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:16:05.872966  481559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:16:05.894740  481559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:16:05.919180  481559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:16:05.919250  481559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:16:05.941725  481559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:16:05.941803  481559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:16:05.952135  481559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:16:05.963507  481559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:16:05.973052  481559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:16:05.980846  481559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:16:05.995839  481559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:16:06.021080  481559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:16:06.038606  481559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:16:06.048894  481559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:16:06.066174  481559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:16:06.269027  481559 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:16:06.481843  481559 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:16:06.481910  481559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:16:06.486525  481559 start.go:564] Will wait 60s for crictl version
	I1108 10:16:06.486586  481559 ssh_runner.go:195] Run: which crictl
	I1108 10:16:06.494950  481559 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:16:06.563003  481559 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:16:06.563156  481559 ssh_runner.go:195] Run: crio --version
	I1108 10:16:06.618531  481559 ssh_runner.go:195] Run: crio --version
	I1108 10:16:06.678469  481559 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:16:06.681418  481559 cli_runner.go:164] Run: docker network inspect embed-certs-606645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:16:06.700477  481559 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:16:06.704595  481559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:16:06.714765  481559 kubeadm.go:884] updating cluster {Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:16:06.714899  481559 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:16:06.714966  481559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:16:06.795623  481559 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:16:06.795649  481559 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:16:06.795707  481559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:16:06.839113  481559 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:16:06.839135  481559 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:16:06.839143  481559 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:16:06.839226  481559 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-606645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:16:06.839319  481559 ssh_runner.go:195] Run: crio config
	I1108 10:16:06.964959  481559 cni.go:84] Creating CNI manager for ""
	I1108 10:16:06.965027  481559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:16:06.965060  481559 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:16:06.965116  481559 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606645 NodeName:embed-certs-606645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:16:06.965286  481559 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606645"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:16:06.965388  481559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:16:06.973212  481559 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:16:06.973333  481559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:16:06.980735  481559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 10:16:06.993302  481559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:16:07.006735  481559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1108 10:16:07.020254  481559 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:16:07.024293  481559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:16:07.033884  481559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:16:07.220554  481559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:16:07.245313  481559 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645 for IP: 192.168.76.2
	I1108 10:16:07.245386  481559 certs.go:195] generating shared ca certs ...
	I1108 10:16:07.245419  481559 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:07.245616  481559 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:16:07.245701  481559 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:16:07.245728  481559 certs.go:257] generating profile certs ...
	I1108 10:16:07.245827  481559 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/client.key
	I1108 10:16:07.245871  481559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/client.crt with IP's: []
	I1108 10:16:07.972646  481559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/client.crt ...
	I1108 10:16:07.972721  481559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/client.crt: {Name:mk8914f7ebf461e419b3ee9251e8b962c4fef8d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:07.972978  481559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/client.key ...
	I1108 10:16:07.973013  481559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/client.key: {Name:mkb25a21499ec6565e23a67bc62d4de984352e3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:07.973177  481559 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key.9e91513e
	I1108 10:16:07.973217  481559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.crt.9e91513e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:16:08.150426  481559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.crt.9e91513e ...
	I1108 10:16:08.150496  481559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.crt.9e91513e: {Name:mkbadb28fa054c58feb5494170b47dfd789389c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:08.150733  481559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key.9e91513e ...
	I1108 10:16:08.150771  481559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key.9e91513e: {Name:mk0c768b5e3ac10d2de673287ae12fae0e9c775f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:08.150928  481559 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.crt.9e91513e -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.crt
	I1108 10:16:08.151062  481559 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key.9e91513e -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key
	I1108 10:16:08.151151  481559 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.key
	I1108 10:16:08.151199  481559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.crt with IP's: []
	I1108 10:16:08.930353  481559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.crt ...
	I1108 10:16:08.930428  481559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.crt: {Name:mk9df800c2e17196c76a81d7840e90a94b981628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:08.930675  481559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.key ...
	I1108 10:16:08.930716  481559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.key: {Name:mk60193ef94fc73b99506e2653262e4044cfd591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:08.931006  481559 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:16:08.931092  481559 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:16:08.931122  481559 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:16:08.931183  481559 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:16:08.931244  481559 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:16:08.931290  481559 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:16:08.931369  481559 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:16:08.932045  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:16:08.951730  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:16:08.977574  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:16:09.004281  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:16:09.025488  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 10:16:09.045841  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:16:09.072576  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:16:09.093914  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:16:09.122092  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:16:09.152669  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:16:09.175536  481559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:16:09.196555  481559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:16:09.212819  481559 ssh_runner.go:195] Run: openssl version
	I1108 10:16:09.232337  481559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:16:09.249994  481559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:16:09.255179  481559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:16:09.255293  481559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:16:09.335248  481559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:16:09.344975  481559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:16:09.353714  481559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:16:09.357887  481559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:16:09.357968  481559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:16:09.401508  481559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:16:09.410027  481559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:16:09.418144  481559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:16:09.421999  481559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:16:09.422065  481559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:16:09.463896  481559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:16:09.472598  481559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:16:09.477027  481559 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:16:09.477081  481559 kubeadm.go:401] StartCluster: {Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:16:09.477158  481559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:16:09.477232  481559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:16:09.506151  481559 cri.go:89] found id: ""
	I1108 10:16:09.506221  481559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:16:09.515268  481559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:16:09.523282  481559 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:16:09.523351  481559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:16:09.532734  481559 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:16:09.532756  481559 kubeadm.go:158] found existing configuration files:
	
	I1108 10:16:09.532805  481559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:16:09.541031  481559 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:16:09.541094  481559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:16:09.548414  481559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:16:09.556849  481559 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:16:09.556965  481559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:16:09.564327  481559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:16:09.574297  481559 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:16:09.574362  481559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:16:09.582604  481559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:16:09.592567  481559 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:16:09.592682  481559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:16:09.600716  481559 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:16:09.649218  481559 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:16:09.649461  481559 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:16:09.693418  481559 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:16:09.693573  481559 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:16:09.693652  481559 kubeadm.go:319] OS: Linux
	I1108 10:16:09.693757  481559 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:16:09.693842  481559 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:16:09.693926  481559 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:16:09.694012  481559 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:16:09.694098  481559 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:16:09.694185  481559 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:16:09.694267  481559 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:16:09.694352  481559 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:16:09.694436  481559 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:16:09.778173  481559 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:16:09.778360  481559 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:16:09.778505  481559 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:16:09.789331  481559 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 10:16:07.204431  477665 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.748632552s
	I1108 10:16:08.738158  477665 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.283314281s
	I1108 10:16:10.957847  477665 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502936735s
	I1108 10:16:10.983722  477665 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:16:11.001279  477665 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:16:11.022525  477665 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:16:11.022736  477665 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-872727 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:16:11.040086  477665 kubeadm.go:319] [bootstrap-token] Using token: a2f7nn.agfo2ajvw819iuhd
	I1108 10:16:11.043022  477665 out.go:252]   - Configuring RBAC rules ...
	I1108 10:16:11.043148  477665 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:16:11.057234  477665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:16:11.064991  477665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:16:11.073331  477665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:16:11.074157  477665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:16:11.079388  477665 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:16:11.366529  477665 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:16:11.826554  477665 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:16:12.366026  477665 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:16:12.366977  477665 kubeadm.go:319] 
	I1108 10:16:12.367056  477665 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:16:12.367063  477665 kubeadm.go:319] 
	I1108 10:16:12.367144  477665 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:16:12.367149  477665 kubeadm.go:319] 
	I1108 10:16:12.367175  477665 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:16:12.367237  477665 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:16:12.367289  477665 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:16:12.367294  477665 kubeadm.go:319] 
	I1108 10:16:12.367351  477665 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:16:12.367360  477665 kubeadm.go:319] 
	I1108 10:16:12.367410  477665 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:16:12.367415  477665 kubeadm.go:319] 
	I1108 10:16:12.367469  477665 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:16:12.367555  477665 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:16:12.367642  477665 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:16:12.367647  477665 kubeadm.go:319] 
	I1108 10:16:12.367735  477665 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:16:12.367815  477665 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:16:12.367820  477665 kubeadm.go:319] 
	I1108 10:16:12.367908  477665 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a2f7nn.agfo2ajvw819iuhd \
	I1108 10:16:12.368016  477665 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 10:16:12.368038  477665 kubeadm.go:319] 	--control-plane 
	I1108 10:16:12.368042  477665 kubeadm.go:319] 
	I1108 10:16:12.368132  477665 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:16:12.368136  477665 kubeadm.go:319] 
	I1108 10:16:12.368428  477665 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a2f7nn.agfo2ajvw819iuhd \
	I1108 10:16:12.368542  477665 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
	I1108 10:16:12.377529  477665 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:16:12.377775  477665 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:16:12.377894  477665 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:16:12.377911  477665 cni.go:84] Creating CNI manager for ""
	I1108 10:16:12.377918  477665 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:16:12.381074  477665 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:16:09.795097  481559 out.go:252]   - Generating certificates and keys ...
	I1108 10:16:09.795204  481559 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:16:09.795282  481559 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:16:10.661040  481559 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:16:12.022222  481559 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:16:12.405357  481559 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:16:12.384044  477665 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:16:12.389539  477665 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:16:12.389563  477665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:16:12.413150  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:16:12.852250  477665 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:16:12.852377  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:12.852440  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-872727 minikube.k8s.io/updated_at=2025_11_08T10_16_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=no-preload-872727 minikube.k8s.io/primary=true
	I1108 10:16:13.125097  477665 ops.go:34] apiserver oom_adj: -16
	I1108 10:16:13.125202  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:13.625304  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:14.125300  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:14.625306  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:15.125283  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:15.625329  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:16.126133  477665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:16.327494  477665 kubeadm.go:1114] duration metric: took 3.475161628s to wait for elevateKubeSystemPrivileges
	I1108 10:16:16.327518  477665 kubeadm.go:403] duration metric: took 29.011860056s to StartCluster
	I1108 10:16:16.327534  477665 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:16.327596  477665 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:16:16.328299  477665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:16.328494  477665 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:16:16.328595  477665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:16:16.328831  477665 config.go:182] Loaded profile config "no-preload-872727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:16:16.328867  477665 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:16:16.328962  477665 addons.go:70] Setting storage-provisioner=true in profile "no-preload-872727"
	I1108 10:16:16.328977  477665 addons.go:239] Setting addon storage-provisioner=true in "no-preload-872727"
	I1108 10:16:16.328999  477665 host.go:66] Checking if "no-preload-872727" exists ...
	I1108 10:16:16.329490  477665 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:16:16.329999  477665 addons.go:70] Setting default-storageclass=true in profile "no-preload-872727"
	I1108 10:16:16.330027  477665 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-872727"
	I1108 10:16:16.330299  477665 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:16:16.335216  477665 out.go:179] * Verifying Kubernetes components...
	I1108 10:16:16.338983  477665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:16:16.367430  477665 addons.go:239] Setting addon default-storageclass=true in "no-preload-872727"
	I1108 10:16:16.367471  477665 host.go:66] Checking if "no-preload-872727" exists ...
	I1108 10:16:16.367886  477665 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:16:16.370040  477665 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:16:13.073320  481559 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:16:13.456127  481559 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:16:13.456462  481559 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-606645 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:16:14.332787  481559 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:16:14.336414  481559 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-606645 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:16:14.455318  481559 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:16:15.094376  481559 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:16:15.444738  481559 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:16:15.445080  481559 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:16:15.595891  481559 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:16:15.910359  481559 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:16:16.261633  481559 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:16:17.227596  481559 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:16:18.029357  481559 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:16:18.029459  481559 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:16:18.033615  481559 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:16:16.372797  477665 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:16:16.372816  477665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:16:16.372879  477665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:16:16.401851  477665 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:16:16.401872  477665 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:16:16.401936  477665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:16:16.425818  477665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:16:16.444241  477665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:16:16.897547  477665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:16:16.947068  477665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:16:16.962846  477665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:16:16.963037  477665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:16:18.453062  477665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.555429146s)
	I1108 10:16:18.453168  477665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.506029265s)
	I1108 10:16:18.453576  477665 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.490491289s)
	I1108 10:16:18.454948  477665 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.49202369s)
	I1108 10:16:18.454983  477665 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
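	The sed pipeline completed above edits the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to 192.168.85.1 ahead of the forward directive, and a log directive ahead of errors. A minimal check of the injected block, reusing the same kubeconfig (the jsonpath/grep invocation is an illustration, not part of this run):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	  #        hosts {
	  #           192.168.85.1 host.minikube.internal
	  #           fallthrough
	  #        }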
	I1108 10:16:18.457280  477665 node_ready.go:35] waiting up to 6m0s for node "no-preload-872727" to be "Ready" ...
	I1108 10:16:18.605874  477665 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 10:16:18.608771  477665 addons.go:515] duration metric: took 2.279867571s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 10:16:18.966126  477665 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-872727" context rescaled to 1 replicas
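	With both manifests applied and the coredns deployment rescaled, a quick follow-up check against this profile would look roughly like the following (illustrative commands, not captured output; "standard" is the StorageClass that minikube's default-storageclass addon creates):
	
	  out/minikube-linux-arm64 -p no-preload-872727 addons list | grep -E 'storage-provisioner|default-storageclass'
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get pod storage-provisioner
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get storageclass standard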
	I1108 10:16:18.037318  481559 out.go:252]   - Booting up control plane ...
	I1108 10:16:18.037436  481559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:16:18.037518  481559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:16:18.038867  481559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:16:18.063740  481559 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:16:18.064087  481559 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:16:18.079970  481559 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:16:18.080322  481559 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:16:18.080371  481559 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:16:18.307815  481559 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:16:18.307940  481559 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:16:19.807978  481559 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501312997s
	I1108 10:16:19.818264  481559 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:16:19.818609  481559 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:16:19.818710  481559 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:16:19.818793  481559 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1108 10:16:20.465578  477665 node_ready.go:57] node "no-preload-872727" has "Ready":"False" status (will retry)
	W1108 10:16:22.960951  477665 node_ready.go:57] node "no-preload-872727" has "Ready":"False" status (will retry)
	I1108 10:16:24.828182  481559 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.009129674s
	I1108 10:16:25.696996  481559 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.87862104s
	I1108 10:16:27.323209  481559 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.504590568s
	I1108 10:16:27.352115  481559 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:16:27.376697  481559 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:16:27.401602  481559 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:16:27.401845  481559 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-606645 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:16:27.423175  481559 kubeadm.go:319] [bootstrap-token] Using token: j0wr7b.f0nqppt9vzpk5vwj
	I1108 10:16:27.426169  481559 out.go:252]   - Configuring RBAC rules ...
	I1108 10:16:27.426304  481559 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:16:27.434004  481559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:16:27.445582  481559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:16:27.450142  481559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:16:27.455560  481559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:16:27.464438  481559 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:16:27.732613  481559 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:16:28.174240  481559 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:16:28.730834  481559 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:16:28.732051  481559 kubeadm.go:319] 
	I1108 10:16:28.732122  481559 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:16:28.732128  481559 kubeadm.go:319] 
	I1108 10:16:28.732204  481559 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:16:28.732209  481559 kubeadm.go:319] 
	I1108 10:16:28.732243  481559 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:16:28.732302  481559 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:16:28.732351  481559 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:16:28.732362  481559 kubeadm.go:319] 
	I1108 10:16:28.732416  481559 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:16:28.732420  481559 kubeadm.go:319] 
	I1108 10:16:28.732466  481559 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:16:28.732471  481559 kubeadm.go:319] 
	I1108 10:16:28.732522  481559 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:16:28.732596  481559 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:16:28.732664  481559 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:16:28.732668  481559 kubeadm.go:319] 
	I1108 10:16:28.732752  481559 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:16:28.732828  481559 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:16:28.732832  481559 kubeadm.go:319] 
	I1108 10:16:28.732932  481559 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j0wr7b.f0nqppt9vzpk5vwj \
	I1108 10:16:28.733035  481559 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 10:16:28.733056  481559 kubeadm.go:319] 	--control-plane 
	I1108 10:16:28.733061  481559 kubeadm.go:319] 
	I1108 10:16:28.733144  481559 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:16:28.733149  481559 kubeadm.go:319] 
	I1108 10:16:28.733230  481559 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j0wr7b.f0nqppt9vzpk5vwj \
	I1108 10:16:28.733330  481559 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
	I1108 10:16:28.738416  481559 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:16:28.738651  481559 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:16:28.738761  481559 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
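	All three kubeadm notices above are warnings only; init still succeeded. On a plain host the kubelet-service warning would normally be addressed with sudo systemctl enable kubelet.service (not run here), and if the printed bootstrap token expires, an equivalent join line can be regenerated on the control plane (illustrative):
	
	  sudo kubeadm token create --print-join-command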
	I1108 10:16:28.738776  481559 cni.go:84] Creating CNI manager for ""
	I1108 10:16:28.738784  481559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:16:28.742001  481559 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1108 10:16:25.460948  477665 node_ready.go:57] node "no-preload-872727" has "Ready":"False" status (will retry)
	W1108 10:16:27.960575  477665 node_ready.go:57] node "no-preload-872727" has "Ready":"False" status (will retry)
	W1108 10:16:29.960768  477665 node_ready.go:57] node "no-preload-872727" has "Ready":"False" status (will retry)
	I1108 10:16:28.744819  481559 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:16:28.750623  481559 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:16:28.750687  481559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:16:28.767128  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:16:29.487351  481559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:16:29.487491  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:29.487556  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-606645 minikube.k8s.io/updated_at=2025_11_08T10_16_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=embed-certs-606645 minikube.k8s.io/primary=true
	I1108 10:16:29.665662  481559 ops.go:34] apiserver oom_adj: -16
	I1108 10:16:29.665783  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:30.166588  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:30.666130  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:31.166196  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:31.666181  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:32.166751  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:32.666427  481559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:16:32.788053  481559 kubeadm.go:1114] duration metric: took 3.300603404s to wait for elevateKubeSystemPrivileges
	I1108 10:16:32.788085  481559 kubeadm.go:403] duration metric: took 23.311009069s to StartCluster
	I1108 10:16:32.788103  481559 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:32.788164  481559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:16:32.789517  481559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:16:32.789758  481559 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:16:32.789848  481559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:16:32.790093  481559 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:16:32.790133  481559 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:16:32.790197  481559 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-606645"
	I1108 10:16:32.790216  481559 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-606645"
	I1108 10:16:32.790242  481559 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:16:32.790918  481559 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:16:32.791218  481559 addons.go:70] Setting default-storageclass=true in profile "embed-certs-606645"
	I1108 10:16:32.791242  481559 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606645"
	I1108 10:16:32.791497  481559 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:16:32.792821  481559 out.go:179] * Verifying Kubernetes components...
	I1108 10:16:32.796012  481559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:16:32.829170  481559 addons.go:239] Setting addon default-storageclass=true in "embed-certs-606645"
	I1108 10:16:32.829215  481559 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:16:32.829666  481559 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:16:32.853382  481559 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:16:32.858778  481559 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:16:32.858806  481559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:16:32.858870  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:32.872474  481559 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:16:32.872497  481559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:16:32.872565  481559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:16:32.918731  481559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:16:32.936660  481559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:16:33.257423  481559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:16:33.258137  481559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:16:33.258810  481559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:16:33.346530  481559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:16:33.428703  481559 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606645" to be "Ready" ...
	I1108 10:16:34.146265  481559 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 10:16:34.410808  481559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.064217093s)
	I1108 10:16:34.411137  481559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.152302996s)
	I1108 10:16:34.423791  481559 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1108 10:16:32.460622  477665 node_ready.go:57] node "no-preload-872727" has "Ready":"False" status (will retry)
	I1108 10:16:32.960676  477665 node_ready.go:49] node "no-preload-872727" is "Ready"
	I1108 10:16:32.960708  477665 node_ready.go:38] duration metric: took 14.503400424s for node "no-preload-872727" to be "Ready" ...
	I1108 10:16:32.960722  477665 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:16:32.960781  477665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:16:32.986778  477665 api_server.go:72] duration metric: took 16.658254457s to wait for apiserver process to appear ...
	I1108 10:16:32.986803  477665 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:16:32.986823  477665 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:16:32.998271  477665 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:16:32.999491  477665 api_server.go:141] control plane version: v1.34.1
	I1108 10:16:32.999522  477665 api_server.go:131] duration metric: took 12.708632ms to wait for apiserver health ...
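	The healthz probe above is a plain HTTPS GET against the apiserver; reproduced by hand it would look roughly like this (skipping TLS verification with -k and relying on anonymous access to /healthz are assumptions, not something this run exercised):
	
	  curl -k https://192.168.85.2:8443/healthz
	  # ok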
	I1108 10:16:32.999532  477665 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:16:33.006046  477665 system_pods.go:59] 8 kube-system pods found
	I1108 10:16:33.006087  477665 system_pods.go:61] "coredns-66bc5c9577-7xnlf" [ee982620-6159-4ebb-8e21-781fc55700b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:16:33.006096  477665 system_pods.go:61] "etcd-no-preload-872727" [c19b8f4b-65c4-4dcd-8586-738c602db3e1] Running
	I1108 10:16:33.006103  477665 system_pods.go:61] "kindnet-lld9n" [b0ad3cfc-5d6d-4d1a-8688-05568684a055] Running
	I1108 10:16:33.006108  477665 system_pods.go:61] "kube-apiserver-no-preload-872727" [79f2cabd-27b1-40a6-97b9-6f1746991d6a] Running
	I1108 10:16:33.006114  477665 system_pods.go:61] "kube-controller-manager-no-preload-872727" [234914ad-be31-4b38-8789-792c2e74387d] Running
	I1108 10:16:33.006119  477665 system_pods.go:61] "kube-proxy-tl7z2" [355abcec-162c-4e65-9dbe-35499009532f] Running
	I1108 10:16:33.006124  477665 system_pods.go:61] "kube-scheduler-no-preload-872727" [a3965441-8378-4e08-be57-f7187b137b89] Running
	I1108 10:16:33.006135  477665 system_pods.go:61] "storage-provisioner" [8dcb4f3f-f5f5-4ce7-a1e2-1def17299376] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:16:33.006141  477665 system_pods.go:74] duration metric: took 6.602507ms to wait for pod list to return data ...
	I1108 10:16:33.006151  477665 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:16:33.009848  477665 default_sa.go:45] found service account: "default"
	I1108 10:16:33.009873  477665 default_sa.go:55] duration metric: took 3.717002ms for default service account to be created ...
	I1108 10:16:33.009884  477665 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:16:33.016836  477665 system_pods.go:86] 8 kube-system pods found
	I1108 10:16:33.016879  477665 system_pods.go:89] "coredns-66bc5c9577-7xnlf" [ee982620-6159-4ebb-8e21-781fc55700b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:16:33.016886  477665 system_pods.go:89] "etcd-no-preload-872727" [c19b8f4b-65c4-4dcd-8586-738c602db3e1] Running
	I1108 10:16:33.016896  477665 system_pods.go:89] "kindnet-lld9n" [b0ad3cfc-5d6d-4d1a-8688-05568684a055] Running
	I1108 10:16:33.016902  477665 system_pods.go:89] "kube-apiserver-no-preload-872727" [79f2cabd-27b1-40a6-97b9-6f1746991d6a] Running
	I1108 10:16:33.016945  477665 system_pods.go:89] "kube-controller-manager-no-preload-872727" [234914ad-be31-4b38-8789-792c2e74387d] Running
	I1108 10:16:33.016957  477665 system_pods.go:89] "kube-proxy-tl7z2" [355abcec-162c-4e65-9dbe-35499009532f] Running
	I1108 10:16:33.016962  477665 system_pods.go:89] "kube-scheduler-no-preload-872727" [a3965441-8378-4e08-be57-f7187b137b89] Running
	I1108 10:16:33.016969  477665 system_pods.go:89] "storage-provisioner" [8dcb4f3f-f5f5-4ce7-a1e2-1def17299376] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:16:33.016998  477665 retry.go:31] will retry after 263.401072ms: missing components: kube-dns
	I1108 10:16:33.288169  477665 system_pods.go:86] 8 kube-system pods found
	I1108 10:16:33.288249  477665 system_pods.go:89] "coredns-66bc5c9577-7xnlf" [ee982620-6159-4ebb-8e21-781fc55700b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:16:33.288272  477665 system_pods.go:89] "etcd-no-preload-872727" [c19b8f4b-65c4-4dcd-8586-738c602db3e1] Running
	I1108 10:16:33.288296  477665 system_pods.go:89] "kindnet-lld9n" [b0ad3cfc-5d6d-4d1a-8688-05568684a055] Running
	I1108 10:16:33.288337  477665 system_pods.go:89] "kube-apiserver-no-preload-872727" [79f2cabd-27b1-40a6-97b9-6f1746991d6a] Running
	I1108 10:16:33.288356  477665 system_pods.go:89] "kube-controller-manager-no-preload-872727" [234914ad-be31-4b38-8789-792c2e74387d] Running
	I1108 10:16:33.288376  477665 system_pods.go:89] "kube-proxy-tl7z2" [355abcec-162c-4e65-9dbe-35499009532f] Running
	I1108 10:16:33.288411  477665 system_pods.go:89] "kube-scheduler-no-preload-872727" [a3965441-8378-4e08-be57-f7187b137b89] Running
	I1108 10:16:33.288438  477665 system_pods.go:89] "storage-provisioner" [8dcb4f3f-f5f5-4ce7-a1e2-1def17299376] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:16:33.288602  477665 retry.go:31] will retry after 370.708436ms: missing components: kube-dns
	I1108 10:16:33.664596  477665 system_pods.go:86] 8 kube-system pods found
	I1108 10:16:33.664633  477665 system_pods.go:89] "coredns-66bc5c9577-7xnlf" [ee982620-6159-4ebb-8e21-781fc55700b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:16:33.664640  477665 system_pods.go:89] "etcd-no-preload-872727" [c19b8f4b-65c4-4dcd-8586-738c602db3e1] Running
	I1108 10:16:33.664648  477665 system_pods.go:89] "kindnet-lld9n" [b0ad3cfc-5d6d-4d1a-8688-05568684a055] Running
	I1108 10:16:33.664653  477665 system_pods.go:89] "kube-apiserver-no-preload-872727" [79f2cabd-27b1-40a6-97b9-6f1746991d6a] Running
	I1108 10:16:33.664659  477665 system_pods.go:89] "kube-controller-manager-no-preload-872727" [234914ad-be31-4b38-8789-792c2e74387d] Running
	I1108 10:16:33.664663  477665 system_pods.go:89] "kube-proxy-tl7z2" [355abcec-162c-4e65-9dbe-35499009532f] Running
	I1108 10:16:33.664667  477665 system_pods.go:89] "kube-scheduler-no-preload-872727" [a3965441-8378-4e08-be57-f7187b137b89] Running
	I1108 10:16:33.664678  477665 system_pods.go:89] "storage-provisioner" [8dcb4f3f-f5f5-4ce7-a1e2-1def17299376] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:16:33.664692  477665 retry.go:31] will retry after 482.353033ms: missing components: kube-dns
	I1108 10:16:34.151726  477665 system_pods.go:86] 8 kube-system pods found
	I1108 10:16:34.151762  477665 system_pods.go:89] "coredns-66bc5c9577-7xnlf" [ee982620-6159-4ebb-8e21-781fc55700b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:16:34.151769  477665 system_pods.go:89] "etcd-no-preload-872727" [c19b8f4b-65c4-4dcd-8586-738c602db3e1] Running
	I1108 10:16:34.151775  477665 system_pods.go:89] "kindnet-lld9n" [b0ad3cfc-5d6d-4d1a-8688-05568684a055] Running
	I1108 10:16:34.151781  477665 system_pods.go:89] "kube-apiserver-no-preload-872727" [79f2cabd-27b1-40a6-97b9-6f1746991d6a] Running
	I1108 10:16:34.151786  477665 system_pods.go:89] "kube-controller-manager-no-preload-872727" [234914ad-be31-4b38-8789-792c2e74387d] Running
	I1108 10:16:34.151790  477665 system_pods.go:89] "kube-proxy-tl7z2" [355abcec-162c-4e65-9dbe-35499009532f] Running
	I1108 10:16:34.151794  477665 system_pods.go:89] "kube-scheduler-no-preload-872727" [a3965441-8378-4e08-be57-f7187b137b89] Running
	I1108 10:16:34.151800  477665 system_pods.go:89] "storage-provisioner" [8dcb4f3f-f5f5-4ce7-a1e2-1def17299376] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:16:34.151817  477665 retry.go:31] will retry after 520.487732ms: missing components: kube-dns
	I1108 10:16:34.676100  477665 system_pods.go:86] 8 kube-system pods found
	I1108 10:16:34.676147  477665 system_pods.go:89] "coredns-66bc5c9577-7xnlf" [ee982620-6159-4ebb-8e21-781fc55700b0] Running
	I1108 10:16:34.676189  477665 system_pods.go:89] "etcd-no-preload-872727" [c19b8f4b-65c4-4dcd-8586-738c602db3e1] Running
	I1108 10:16:34.676196  477665 system_pods.go:89] "kindnet-lld9n" [b0ad3cfc-5d6d-4d1a-8688-05568684a055] Running
	I1108 10:16:34.676202  477665 system_pods.go:89] "kube-apiserver-no-preload-872727" [79f2cabd-27b1-40a6-97b9-6f1746991d6a] Running
	I1108 10:16:34.676213  477665 system_pods.go:89] "kube-controller-manager-no-preload-872727" [234914ad-be31-4b38-8789-792c2e74387d] Running
	I1108 10:16:34.676219  477665 system_pods.go:89] "kube-proxy-tl7z2" [355abcec-162c-4e65-9dbe-35499009532f] Running
	I1108 10:16:34.676228  477665 system_pods.go:89] "kube-scheduler-no-preload-872727" [a3965441-8378-4e08-be57-f7187b137b89] Running
	I1108 10:16:34.676240  477665 system_pods.go:89] "storage-provisioner" [8dcb4f3f-f5f5-4ce7-a1e2-1def17299376] Running
	I1108 10:16:34.676252  477665 system_pods.go:126] duration metric: took 1.666361683s to wait for k8s-apps to be running ...
	I1108 10:16:34.676260  477665 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:16:34.676324  477665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:16:34.698000  477665 system_svc.go:56] duration metric: took 21.729431ms WaitForService to wait for kubelet
	I1108 10:16:34.698079  477665 kubeadm.go:587] duration metric: took 18.369560493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:16:34.698107  477665 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:16:34.701109  477665 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:16:34.701150  477665 node_conditions.go:123] node cpu capacity is 2
	I1108 10:16:34.701165  477665 node_conditions.go:105] duration metric: took 3.051159ms to run NodePressure ...
	I1108 10:16:34.701179  477665 start.go:242] waiting for startup goroutines ...
	I1108 10:16:34.701189  477665 start.go:247] waiting for cluster config update ...
	I1108 10:16:34.701204  477665 start.go:256] writing updated cluster config ...
	I1108 10:16:34.701530  477665 ssh_runner.go:195] Run: rm -f paused
	I1108 10:16:34.707063  477665 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:16:34.713505  477665 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7xnlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:34.724725  477665 pod_ready.go:94] pod "coredns-66bc5c9577-7xnlf" is "Ready"
	I1108 10:16:34.724800  477665 pod_ready.go:86] duration metric: took 11.208558ms for pod "coredns-66bc5c9577-7xnlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:34.731763  477665 pod_ready.go:83] waiting for pod "etcd-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:34.744135  477665 pod_ready.go:94] pod "etcd-no-preload-872727" is "Ready"
	I1108 10:16:34.744215  477665 pod_ready.go:86] duration metric: took 12.42099ms for pod "etcd-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:34.754528  477665 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:34.762526  477665 pod_ready.go:94] pod "kube-apiserver-no-preload-872727" is "Ready"
	I1108 10:16:34.762552  477665 pod_ready.go:86] duration metric: took 7.948971ms for pod "kube-apiserver-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:34.774859  477665 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:35.112714  477665 pod_ready.go:94] pod "kube-controller-manager-no-preload-872727" is "Ready"
	I1108 10:16:35.112792  477665 pod_ready.go:86] duration metric: took 337.90363ms for pod "kube-controller-manager-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:35.311797  477665 pod_ready.go:83] waiting for pod "kube-proxy-tl7z2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:35.711978  477665 pod_ready.go:94] pod "kube-proxy-tl7z2" is "Ready"
	I1108 10:16:35.712004  477665 pod_ready.go:86] duration metric: took 400.181377ms for pod "kube-proxy-tl7z2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:35.912451  477665 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:36.311817  477665 pod_ready.go:94] pod "kube-scheduler-no-preload-872727" is "Ready"
	I1108 10:16:36.311897  477665 pod_ready.go:86] duration metric: took 399.416255ms for pod "kube-scheduler-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:16:36.311917  477665 pod_ready.go:40] duration metric: took 1.604768407s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:16:36.364361  477665 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:16:36.367554  477665 out.go:179] * Done! kubectl is now configured to use "no-preload-872727" cluster and "default" namespace by default
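	The readiness loop above (node_ready, then each labelled kube-system pod) has a rough kubectl equivalent using the context this run just configured (illustrative only; minikube polls the API through its own client rather than shelling out):
	
	  kubectl --context no-preload-872727 wait --for=condition=Ready node/no-preload-872727 --timeout=6m
	  kubectl --context no-preload-872727 -n kube-system wait --for=condition=Ready \
	    pod -l k8s-app=kube-dns --timeout=4m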
	I1108 10:16:34.426661  481559 addons.go:515] duration metric: took 1.636509212s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 10:16:34.650415  481559 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-606645" context rescaled to 1 replicas
	W1108 10:16:35.432250  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	W1108 10:16:37.932180  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	W1108 10:16:40.432478  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 08 10:16:33 no-preload-872727 crio[838]: time="2025-11-08T10:16:33.311099517Z" level=info msg="Created container 5af38437b610210204df3e03a1efe0cb72464fea5486b1ca0aa5d94a29b617a8: kube-system/coredns-66bc5c9577-7xnlf/coredns" id=3d2ed040-4f0a-40cb-884c-9e1571d39ad5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:16:33 no-preload-872727 crio[838]: time="2025-11-08T10:16:33.314453144Z" level=info msg="Starting container: 5af38437b610210204df3e03a1efe0cb72464fea5486b1ca0aa5d94a29b617a8" id=79e5d81e-8e6c-4c9c-a3bb-421d1ac254bd name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:16:33 no-preload-872727 crio[838]: time="2025-11-08T10:16:33.318280621Z" level=info msg="Started container" PID=2500 containerID=5af38437b610210204df3e03a1efe0cb72464fea5486b1ca0aa5d94a29b617a8 description=kube-system/coredns-66bc5c9577-7xnlf/coredns id=79e5d81e-8e6c-4c9c-a3bb-421d1ac254bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bdd0037874ef4c134052e98771ed1f0d1ff185f4ec03f4050b42a595e363a41
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.178541165Z" level=info msg="Running pod sandbox: default/busybox/POD" id=127e06d5-658c-4a4c-91b9-a6a024143c6e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.178618245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.183693551Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9010639f7b4885d7e99be801594321aba34928062c31cf26ef40f0c3e908d255 UID:f23722ee-2a7d-4548-b3a6-705dd0782670 NetNS:/var/run/netns/c86b4e26-9910-4cd6-a9e9-cff5231e785d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40015ee9f8}] Aliases:map[]}"
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.183729572Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.194707802Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9010639f7b4885d7e99be801594321aba34928062c31cf26ef40f0c3e908d255 UID:f23722ee-2a7d-4548-b3a6-705dd0782670 NetNS:/var/run/netns/c86b4e26-9910-4cd6-a9e9-cff5231e785d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40015ee9f8}] Aliases:map[]}"
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.195025639Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.200168753Z" level=info msg="Ran pod sandbox 9010639f7b4885d7e99be801594321aba34928062c31cf26ef40f0c3e908d255 with infra container: default/busybox/POD" id=127e06d5-658c-4a4c-91b9-a6a024143c6e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.201470885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e53f6a9-0f92-42c6-b01a-f4f3455f10f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.201711379Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6e53f6a9-0f92-42c6-b01a-f4f3455f10f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.201767896Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6e53f6a9-0f92-42c6-b01a-f4f3455f10f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.202935143Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1c39bd96-1b0e-456a-8bb5-f787d47d6e86 name=/runtime.v1.ImageService/PullImage
	Nov 08 10:16:37 no-preload-872727 crio[838]: time="2025-11-08T10:16:37.204997879Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.428749072Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1c39bd96-1b0e-456a-8bb5-f787d47d6e86 name=/runtime.v1.ImageService/PullImage
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.429755611Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=58c04693-af36-472d-8c46-f3729818c160 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.433082916Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2f14b897-0bd6-4b19-bf3a-d9dee12d701b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.438549799Z" level=info msg="Creating container: default/busybox/busybox" id=692901b2-45c9-4f12-adbd-3fc4787d1acc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.43867917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.443439102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.44391356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.460146996Z" level=info msg="Created container fe9632c3aac445bd004dc9b50dab66d359d2322665a3268ad6d831cc7162aa0d: default/busybox/busybox" id=692901b2-45c9-4f12-adbd-3fc4787d1acc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.461100981Z" level=info msg="Starting container: fe9632c3aac445bd004dc9b50dab66d359d2322665a3268ad6d831cc7162aa0d" id=d65a5b42-19b1-45c1-82a3-41f644ea089f name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:16:39 no-preload-872727 crio[838]: time="2025-11-08T10:16:39.462643106Z" level=info msg="Started container" PID=2555 containerID=fe9632c3aac445bd004dc9b50dab66d359d2322665a3268ad6d831cc7162aa0d description=default/busybox/busybox id=d65a5b42-19b1-45c1-82a3-41f644ea089f name=/runtime.v1.RuntimeService/StartContainer sandboxID=9010639f7b4885d7e99be801594321aba34928062c31cf26ef40f0c3e908d255
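	The sandbox and container IDs that CRI-O logs above can be inspected directly on the node with crictl (illustrative commands, not part of the captured output; the ID prefix comes from the log):
	
	  sudo crictl pods --name busybox
	  sudo crictl ps --name busybox
	  sudo crictl logs fe9632c3aac44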
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fe9632c3aac44       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   9010639f7b488       busybox                                     default
	5af38437b6102       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   9bdd0037874ef       coredns-66bc5c9577-7xnlf                    kube-system
	cf09eecac49f8       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   38901c611c844       storage-provisioner                         kube-system
	bdf7c4f36d5bb       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   27101c653f4d6       kindnet-lld9n                               kube-system
	7f1906cee38c9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   75a95ffc83ab3       kube-proxy-tl7z2                            kube-system
	532dcb24c299e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   d0bf842f8842b       kube-apiserver-no-preload-872727            kube-system
	5e0b8bf4ecdd8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   5e48facd579b2       kube-controller-manager-no-preload-872727   kube-system
	3ca7c9fe505ec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   5691183f639fa       kube-scheduler-no-preload-872727            kube-system
	d8ebd03dd8bda       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   eaa6c85f74714       etcd-no-preload-872727                      kube-system
	
	
	==> coredns [5af38437b610210204df3e03a1efe0cb72464fea5486b1ca0aa5d94a29b617a8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57562 - 30563 "HINFO IN 1576033975734912766.2843902238600645529. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035911641s
	
	
	==> describe nodes <==
	Name:               no-preload-872727
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-872727
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=no-preload-872727
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_16_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:16:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-872727
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:16:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:16:42 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:16:42 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:16:42 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:16:42 +0000   Sat, 08 Nov 2025 10:16:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-872727
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                f5ae8ced-8225-4268-ba4a-f32dd64e1a62
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-7xnlf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-872727                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-lld9n                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-872727             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-872727    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-tl7z2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-872727             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-872727 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-872727 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-872727 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-872727 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-872727 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-872727 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-872727 event: Registered Node no-preload-872727 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-872727 status is now: NodeReady
	
	
	==> dmesg <==
	[ +37.319908] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:51] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d8ebd03dd8bda2b1106320bbca2495b4446275702dbc2d061d0d798f689563e5] <==
	{"level":"warn","ts":"2025-11-08T10:16:06.340672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.386409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.411058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.460234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.514967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.535988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.566976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.585550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.603877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.652066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.675387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.713093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.746426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.785616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.822535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.865151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.884375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.907436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.936975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:06.999272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:07.047930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:07.101821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:07.137538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:07.169372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:07.360001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57696","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:16:47 up  2:59,  0 user,  load average: 5.54, 3.91, 2.76
	Linux no-preload-872727 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bdf7c4f36d5bbf38cd37bab1325bb36f6fb2191efa9b77453b127179cef5cd4d] <==
	I1108 10:16:22.118419       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:16:22.118762       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:16:22.118896       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:16:22.118914       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:16:22.118926       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:16:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:16:22.316098       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:16:22.316124       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:16:22.316135       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:16:22.329050       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 10:16:22.416951       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:16:22.417046       1 metrics.go:72] Registering metrics
	I1108 10:16:22.417133       1 controller.go:711] "Syncing nftables rules"
	I1108 10:16:32.323711       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:16:32.323764       1 main.go:301] handling current node
	I1108 10:16:42.316172       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:16:42.316213       1 main.go:301] handling current node
	
	
	==> kube-apiserver [532dcb24c299ef9de2e6c8f76cffd5ec27b8c189147f47f282d039b446f53d0f] <==
	E1108 10:16:08.818814       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1108 10:16:08.867533       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:16:08.895155       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:16:08.895290       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:16:08.922339       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:16:08.928813       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:16:09.023012       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:16:09.440102       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:16:09.446931       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:16:09.447018       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:16:10.453964       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:16:10.511214       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:16:10.665358       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:16:10.677474       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1108 10:16:10.678790       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:16:10.684532       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:16:10.741194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:16:11.786453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:16:11.819365       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:16:11.841076       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:16:16.432884       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:16:16.464120       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:16:16.550578       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1108 10:16:16.764319       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1108 10:16:45.698697       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:45944: use of closed network connection
	
	
	==> kube-controller-manager [5e0b8bf4ecdd80092200a27e68b042fe2da20138527652d61e532cfbfa4dd8e2] <==
	I1108 10:16:15.888333       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:16:15.888732       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:16:15.893263       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:16:15.899780       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:16:15.938806       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:16:15.938992       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:16:15.939081       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:16:15.939147       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:16:15.939323       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:16:15.939487       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:16:15.940121       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:16:15.943835       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:16:15.944014       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:16:15.947920       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:16:15.948001       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 10:16:15.954492       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:16:15.956191       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:16:15.986666       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:16:15.986762       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:16:15.986795       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:16:15.988224       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:16:15.994445       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:16:15.998992       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:16:16.000188       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:16:35.876252       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7f1906cee38c94b5e62a41cd5999c5b30b860cbc2c68644bede3dfed6ee8d168] <==
	I1108 10:16:17.596852       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:16:17.762379       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:16:17.863671       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:16:17.863707       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:16:17.863790       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:16:18.070740       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:16:18.070794       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:16:18.100272       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:16:18.100616       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:16:18.100629       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:16:18.102681       1 config.go:200] "Starting service config controller"
	I1108 10:16:18.102692       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:16:18.102707       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:16:18.102711       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:16:18.102724       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:16:18.102728       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:16:18.103365       1 config.go:309] "Starting node config controller"
	I1108 10:16:18.103373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:16:18.103379       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:16:18.203295       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:16:18.203331       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:16:18.203366       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ca7c9fe505ec743b8d99de18c45c46d5a525ebff15323d09c211e1aabec8573] <==
	E1108 10:16:08.773848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:16:08.773914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:16:08.773984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:16:08.774060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:16:08.781298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:16:08.774126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:16:08.781703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:16:08.781760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:16:08.781802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:16:08.781841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:16:08.781880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:16:08.781913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:16:08.781949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:16:08.781993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:16:08.782032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:16:08.782096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:16:09.653054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:16:09.760292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:16:09.787206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 10:16:09.839102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:16:09.842670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:16:09.884540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:16:09.957919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:16:09.966304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1108 10:16:12.700382       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: I1108 10:16:16.807059    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b0ad3cfc-5d6d-4d1a-8688-05568684a055-cni-cfg\") pod \"kindnet-lld9n\" (UID: \"b0ad3cfc-5d6d-4d1a-8688-05568684a055\") " pod="kube-system/kindnet-lld9n"
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: I1108 10:16:16.807080    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ad3cfc-5d6d-4d1a-8688-05568684a055-lib-modules\") pod \"kindnet-lld9n\" (UID: \"b0ad3cfc-5d6d-4d1a-8688-05568684a055\") " pod="kube-system/kindnet-lld9n"
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: I1108 10:16:16.807097    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhmff\" (UniqueName: \"kubernetes.io/projected/b0ad3cfc-5d6d-4d1a-8688-05568684a055-kube-api-access-lhmff\") pod \"kindnet-lld9n\" (UID: \"b0ad3cfc-5d6d-4d1a-8688-05568684a055\") " pod="kube-system/kindnet-lld9n"
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: I1108 10:16:16.807121    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/355abcec-162c-4e65-9dbe-35499009532f-xtables-lock\") pod \"kube-proxy-tl7z2\" (UID: \"355abcec-162c-4e65-9dbe-35499009532f\") " pod="kube-system/kube-proxy-tl7z2"
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: I1108 10:16:16.807136    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/355abcec-162c-4e65-9dbe-35499009532f-lib-modules\") pod \"kube-proxy-tl7z2\" (UID: \"355abcec-162c-4e65-9dbe-35499009532f\") " pod="kube-system/kube-proxy-tl7z2"
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: I1108 10:16:16.807156    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4ln\" (UniqueName: \"kubernetes.io/projected/355abcec-162c-4e65-9dbe-35499009532f-kube-api-access-hg4ln\") pod \"kube-proxy-tl7z2\" (UID: \"355abcec-162c-4e65-9dbe-35499009532f\") " pod="kube-system/kube-proxy-tl7z2"
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: I1108 10:16:16.807171    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ad3cfc-5d6d-4d1a-8688-05568684a055-xtables-lock\") pod \"kindnet-lld9n\" (UID: \"b0ad3cfc-5d6d-4d1a-8688-05568684a055\") " pod="kube-system/kindnet-lld9n"
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: E1108 10:16:16.961924    2030 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: E1108 10:16:16.961961    2030 projected.go:196] Error preparing data for projected volume kube-api-access-lhmff for pod kube-system/kindnet-lld9n: configmap "kube-root-ca.crt" not found
	Nov 08 10:16:16 no-preload-872727 kubelet[2030]: E1108 10:16:16.962035    2030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0ad3cfc-5d6d-4d1a-8688-05568684a055-kube-api-access-lhmff podName:b0ad3cfc-5d6d-4d1a-8688-05568684a055 nodeName:}" failed. No retries permitted until 2025-11-08 10:16:17.462010871 +0000 UTC m=+5.775989661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lhmff" (UniqueName: "kubernetes.io/projected/b0ad3cfc-5d6d-4d1a-8688-05568684a055-kube-api-access-lhmff") pod "kindnet-lld9n" (UID: "b0ad3cfc-5d6d-4d1a-8688-05568684a055") : configmap "kube-root-ca.crt" not found
	Nov 08 10:16:17 no-preload-872727 kubelet[2030]: I1108 10:16:17.011809    2030 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:16:17 no-preload-872727 kubelet[2030]: W1108 10:16:17.296256    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/crio-75a95ffc83ab351630f5487f503a41ee6f35acd668e9dd530b61c7a6d52dac9f WatchSource:0}: Error finding container 75a95ffc83ab351630f5487f503a41ee6f35acd668e9dd530b61c7a6d52dac9f: Status 404 returned error can't find the container with id 75a95ffc83ab351630f5487f503a41ee6f35acd668e9dd530b61c7a6d52dac9f
	Nov 08 10:16:18 no-preload-872727 kubelet[2030]: I1108 10:16:18.184098    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tl7z2" podStartSLOduration=2.184084105 podStartE2EDuration="2.184084105s" podCreationTimestamp="2025-11-08 10:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:16:18.183782835 +0000 UTC m=+6.497761641" watchObservedRunningTime="2025-11-08 10:16:18.184084105 +0000 UTC m=+6.498062895"
	Nov 08 10:16:22 no-preload-872727 kubelet[2030]: I1108 10:16:22.188351    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lld9n" podStartSLOduration=1.981777895 podStartE2EDuration="6.18833238s" podCreationTimestamp="2025-11-08 10:16:16 +0000 UTC" firstStartedPulling="2025-11-08 10:16:17.727813237 +0000 UTC m=+6.041792027" lastFinishedPulling="2025-11-08 10:16:21.934367722 +0000 UTC m=+10.248346512" observedRunningTime="2025-11-08 10:16:22.182898844 +0000 UTC m=+10.496877634" watchObservedRunningTime="2025-11-08 10:16:22.18833238 +0000 UTC m=+10.502311170"
	Nov 08 10:16:32 no-preload-872727 kubelet[2030]: I1108 10:16:32.520137    2030 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 10:16:32 no-preload-872727 kubelet[2030]: I1108 10:16:32.736350    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv2r9\" (UniqueName: \"kubernetes.io/projected/ee982620-6159-4ebb-8e21-781fc55700b0-kube-api-access-nv2r9\") pod \"coredns-66bc5c9577-7xnlf\" (UID: \"ee982620-6159-4ebb-8e21-781fc55700b0\") " pod="kube-system/coredns-66bc5c9577-7xnlf"
	Nov 08 10:16:32 no-preload-872727 kubelet[2030]: I1108 10:16:32.736406    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8dcb4f3f-f5f5-4ce7-a1e2-1def17299376-tmp\") pod \"storage-provisioner\" (UID: \"8dcb4f3f-f5f5-4ce7-a1e2-1def17299376\") " pod="kube-system/storage-provisioner"
	Nov 08 10:16:32 no-preload-872727 kubelet[2030]: I1108 10:16:32.736438    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wldh\" (UniqueName: \"kubernetes.io/projected/8dcb4f3f-f5f5-4ce7-a1e2-1def17299376-kube-api-access-2wldh\") pod \"storage-provisioner\" (UID: \"8dcb4f3f-f5f5-4ce7-a1e2-1def17299376\") " pod="kube-system/storage-provisioner"
	Nov 08 10:16:32 no-preload-872727 kubelet[2030]: I1108 10:16:32.736458    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee982620-6159-4ebb-8e21-781fc55700b0-config-volume\") pod \"coredns-66bc5c9577-7xnlf\" (UID: \"ee982620-6159-4ebb-8e21-781fc55700b0\") " pod="kube-system/coredns-66bc5c9577-7xnlf"
	Nov 08 10:16:33 no-preload-872727 kubelet[2030]: W1108 10:16:33.197998    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/crio-38901c611c8447db86a175141c557d78ab52b84d0189ec26d07f4762e3b2301c WatchSource:0}: Error finding container 38901c611c8447db86a175141c557d78ab52b84d0189ec26d07f4762e3b2301c: Status 404 returned error can't find the container with id 38901c611c8447db86a175141c557d78ab52b84d0189ec26d07f4762e3b2301c
	Nov 08 10:16:33 no-preload-872727 kubelet[2030]: W1108 10:16:33.223197    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/crio-9bdd0037874ef4c134052e98771ed1f0d1ff185f4ec03f4050b42a595e363a41 WatchSource:0}: Error finding container 9bdd0037874ef4c134052e98771ed1f0d1ff185f4ec03f4050b42a595e363a41: Status 404 returned error can't find the container with id 9bdd0037874ef4c134052e98771ed1f0d1ff185f4ec03f4050b42a595e363a41
	Nov 08 10:16:34 no-preload-872727 kubelet[2030]: I1108 10:16:34.229810    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.229789086 podStartE2EDuration="16.229789086s" podCreationTimestamp="2025-11-08 10:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:16:34.209167423 +0000 UTC m=+22.523146229" watchObservedRunningTime="2025-11-08 10:16:34.229789086 +0000 UTC m=+22.543767884"
	Nov 08 10:16:36 no-preload-872727 kubelet[2030]: I1108 10:16:36.568089    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7xnlf" podStartSLOduration=19.56806978 podStartE2EDuration="19.56806978s" podCreationTimestamp="2025-11-08 10:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:16:34.243163939 +0000 UTC m=+22.557142827" watchObservedRunningTime="2025-11-08 10:16:36.56806978 +0000 UTC m=+24.882048570"
	Nov 08 10:16:36 no-preload-872727 kubelet[2030]: I1108 10:16:36.769858    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z86t\" (UniqueName: \"kubernetes.io/projected/f23722ee-2a7d-4548-b3a6-705dd0782670-kube-api-access-6z86t\") pod \"busybox\" (UID: \"f23722ee-2a7d-4548-b3a6-705dd0782670\") " pod="default/busybox"
	Nov 08 10:16:37 no-preload-872727 kubelet[2030]: W1108 10:16:37.199276    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/crio-9010639f7b4885d7e99be801594321aba34928062c31cf26ef40f0c3e908d255 WatchSource:0}: Error finding container 9010639f7b4885d7e99be801594321aba34928062c31cf26ef40f0c3e908d255: Status 404 returned error can't find the container with id 9010639f7b4885d7e99be801594321aba34928062c31cf26ef40f0c3e908d255
	
	
	==> storage-provisioner [cf09eecac49f85fb459ee1dd116877bc7c33198542750f28241d4d7c233f4d4a] <==
	I1108 10:16:33.335190       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:16:33.373491       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:16:33.373694       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:16:33.385421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:33.398690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:16:33.399076       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:16:33.399281       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-872727_f6f23bfe-f1ef-45e7-ab14-8d23aaabc411!
	I1108 10:16:33.408476       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e3bc8d2-5847-4f52-bedc-77da0e14b7f9", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-872727_f6f23bfe-f1ef-45e7-ab14-8d23aaabc411 became leader
	W1108 10:16:33.432536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:33.455728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:16:33.510049       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-872727_f6f23bfe-f1ef-45e7-ab14-8d23aaabc411!
	W1108 10:16:35.458851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:35.466052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:37.468709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:37.476509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:39.479469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:39.486628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:41.490372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:41.495690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:43.498980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:43.503459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:45.506831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:45.512089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:47.516210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:16:47.524451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-872727 -n no-preload-872727
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-872727 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (453.563545ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:17:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
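Editor's note: the failure above follows the same pattern seen in the other EnableAddonWhileActive and Pause failures in this report. Before enabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node (as the stderr shows); on these runs /run/runc does not exist yet, the check exits non-zero, and the enable aborts with MK_ADDON_ENABLE_PAUSED. Below is a minimal, editorial sketch of reproducing that check by hand. The profile name is taken from the failing command above; wrapping the call in a small Go program (rather than the exact helper minikube uses internally) is an assumption for illustration only.

	// Editorial sketch (not from the minikube source): rerun the paused check that
	// aborts the addon enable above, by shelling into the node and running the same
	// "sudo runc list -f json" command reported in the error message.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "embed-certs-606645" // profile from the failing command above
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
			"--", "sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if err != nil {
			// Expected on these runs: output containing
			// "open /run/runc: no such file or directory" and a non-zero exit,
			// matching the stderr captured above.
			fmt.Printf("runc list failed: %v\n", err)
		}
	}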
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-606645 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-606645 describe deploy/metrics-server -n kube-system: exit status 1 (151.167831ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-606645 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
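Editor's note: the assertion expects the metrics-server Deployment to carry the overridden image fake.domain/registry.k8s.io/echoserver:1.4; because the enable aborted, the Deployment was never created and the describe above returns NotFound. The following is a hedged sketch of how the image override could be verified once the addon actually deploys; the kubectl jsonpath query is an editorial choice, not the helper the test itself uses.

	// Editorial sketch: read the image of the metrics-server Deployment via a
	// kubectl jsonpath query. On this run it fails with NotFound, as in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "embed-certs-606645",
			"-n", "kube-system", "get", "deploy", "metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
		if err != nil {
			fmt.Printf("lookup failed (expected on this run): %v: %s\n", err, out)
			return
		}
		// A successful enable with the flags above should print
		// fake.domain/registry.k8s.io/echoserver:1.4 here.
		fmt.Printf("metrics-server image: %s\n", out)
	}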
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-606645
helpers_test.go:243: (dbg) docker inspect embed-certs-606645:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431",
	        "Created": "2025-11-08T10:15:58.52351748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:15:58.577542084Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/hostname",
	        "HostsPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/hosts",
	        "LogPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431-json.log",
	        "Name": "/embed-certs-606645",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-606645:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-606645",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431",
	                "LowerDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-606645",
	                "Source": "/var/lib/docker/volumes/embed-certs-606645/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-606645",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-606645",
	                "name.minikube.sigs.k8s.io": "embed-certs-606645",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "067b4351ac2316b966ead89f87baa209277105a5e0cbfb30e6ebb4b504208fb3",
	            "SandboxKey": "/var/run/docker/netns/067b4351ac23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-606645": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:10:6e:73:ef:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "805d16fd71681779d29643ac47fdf579dc44f7ad5660dcf2f7e7941c9bae9d2a",
	                    "EndpointID": "f7fb46118bdb268771e2ceea64e71f533035e0c4d97bdd34d7236e2641dd8aa7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-606645",
	                        "d42979033f3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
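The inspect dump above is the post-mortem state of the failed embed-certs-606645 node; the fields that matter for debugging are the published ports under NetworkSettings.Ports (SSH on 22/tcp is mapped to 127.0.0.1:33433) and the container's address 192.168.76.2 on the embed-certs-606645 network. Below is a minimal Go sketch, not minikube code, of pulling that SSH mapping back out of the same JSON; the file name and struct here are illustrative assumptions, and the equivalent lookup can also be done with the --format template that appears later in this log (docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'").

	// portfrominspect.go - illustrative sketch, not part of minikube.
	// Reads `docker container inspect <name>` JSON from stdin and prints the
	// host address/port that 22/tcp is published on (127.0.0.1:33433 above).
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Only the fields needed for the port lookup are modelled here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var entries []inspectEntry
		if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, e := range entries {
			for _, binding := range e.NetworkSettings.Ports["22/tcp"] {
				fmt.Printf("ssh reachable at %s:%s\n", binding.HostIp, binding.HostPort)
			}
		}
	}

Usage (assuming the container still exists): docker container inspect embed-certs-606645 | go run portfrominspect.go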
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-606645 -n embed-certs-606645
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-606645 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-606645 logs -n 25: (1.716128066s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p pause-585281                                                                                                                                                                                                                               │ pause-585281             │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ delete  │ -p force-systemd-env-000082                                                                                                                                                                                                                   │ force-systemd-env-000082 │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:11 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p cert-options-916440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:11 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ cert-options-916440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ ssh     │ -p cert-options-916440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ delete  │ -p cert-options-916440                                                                                                                                                                                                                        │ cert-options-916440      │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │                     │
	│ stop    │ -p old-k8s-version-332573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │ 08 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-332573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ image   │ old-k8s-version-332573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-332573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727        │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p cert-expiration-328489                                                                                                                                                                                                                     │ cert-expiration-328489   │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-872727        │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │                     │
	│ stop    │ -p no-preload-872727 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-872727        │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-872727        │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727        │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645       │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:17:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:17:00.863220  485498 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:17:00.863408  485498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:17:00.863421  485498 out.go:374] Setting ErrFile to fd 2...
	I1108 10:17:00.863428  485498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:17:00.863749  485498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:17:00.864180  485498 out.go:368] Setting JSON to false
	I1108 10:17:00.865320  485498 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10770,"bootTime":1762586251,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:17:00.865413  485498 start.go:143] virtualization:  
	I1108 10:17:00.868545  485498 out.go:179] * [no-preload-872727] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:17:00.872554  485498 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:17:00.872704  485498 notify.go:221] Checking for updates...
	I1108 10:17:00.879296  485498 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:17:00.882209  485498 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:00.885692  485498 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:17:00.888675  485498 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:17:00.891648  485498 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:17:00.895246  485498 config.go:182] Loaded profile config "no-preload-872727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:00.895868  485498 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:17:00.933516  485498 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:17:00.933639  485498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:17:00.995990  485498 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:17:00.98554706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:17:00.996116  485498 docker.go:319] overlay module found
	I1108 10:17:00.999211  485498 out.go:179] * Using the docker driver based on existing profile
	I1108 10:17:01.002355  485498 start.go:309] selected driver: docker
	I1108 10:17:01.002384  485498 start.go:930] validating driver "docker" against &{Name:no-preload-872727 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-872727 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:01.002500  485498 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:17:01.003300  485498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:17:01.061801  485498 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:17:01.051317904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:17:01.062281  485498 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:17:01.062320  485498 cni.go:84] Creating CNI manager for ""
	I1108 10:17:01.062376  485498 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:17:01.062424  485498 start.go:353] cluster config:
	{Name:no-preload-872727 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-872727 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:01.065905  485498 out.go:179] * Starting "no-preload-872727" primary control-plane node in "no-preload-872727" cluster
	I1108 10:17:01.069089  485498 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:17:01.072165  485498 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:17:01.075062  485498 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:17:01.075177  485498 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:17:01.075247  485498 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/config.json ...
	I1108 10:17:01.075528  485498 cache.go:107] acquiring lock: {Name:mkb442361a3d693952fc672882f6b6b7213bc849 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.075628  485498 cache.go:115] /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 10:17:01.075643  485498 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 126.335µs
	I1108 10:17:01.075651  485498 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 10:17:01.075670  485498 cache.go:107] acquiring lock: {Name:mke3644ef00412590c51848a7c516b6a29989bff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.075724  485498 cache.go:115] /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 10:17:01.075734  485498 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 65.256µs
	I1108 10:17:01.075740  485498 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 10:17:01.075750  485498 cache.go:107] acquiring lock: {Name:mk8ae59e5b04a9c56126763a91e2a50d56db3baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.075783  485498 cache.go:115] /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 10:17:01.075792  485498 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.504µs
	I1108 10:17:01.075799  485498 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 10:17:01.075809  485498 cache.go:107] acquiring lock: {Name:mk86e26b2f73ce2e9f3521d64f7eddde7b1a7834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.075840  485498 cache.go:115] /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 10:17:01.075851  485498 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 41.985µs
	I1108 10:17:01.075858  485498 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 10:17:01.075868  485498 cache.go:107] acquiring lock: {Name:mkd196be9375c5975b6926d18ee16cb514fef751 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.075905  485498 cache.go:115] /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 10:17:01.075914  485498 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 46.958µs
	I1108 10:17:01.075921  485498 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 10:17:01.075936  485498 cache.go:107] acquiring lock: {Name:mkdbcaa257b5d0e1156ae684a03e1db0e4fbb110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.075966  485498 cache.go:115] /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1108 10:17:01.075976  485498 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.886µs
	I1108 10:17:01.076072  485498 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 10:17:01.076099  485498 cache.go:107] acquiring lock: {Name:mk6859d6bebec929976433582cd26f4c6d3b3716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.076148  485498 cache.go:115] /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 10:17:01.076161  485498 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 65.396µs
	I1108 10:17:01.076168  485498 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 10:17:01.076179  485498 cache.go:107] acquiring lock: {Name:mkd90f88b1e7bac80d7dfb30db61ce9a072c74b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.076212  485498 cache.go:115] /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 10:17:01.076226  485498 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 45.268µs
	I1108 10:17:01.076251  485498 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 10:17:01.076272  485498 cache.go:87] Successfully saved all images to host disk.
	I1108 10:17:01.101709  485498 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:17:01.101731  485498 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:17:01.101799  485498 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:17:01.101872  485498 start.go:360] acquireMachinesLock for no-preload-872727: {Name:mk1d2a5a5cbaa85e8c94e98e56901783e72603e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:01.101959  485498 start.go:364] duration metric: took 65.19µs to acquireMachinesLock for "no-preload-872727"
	I1108 10:17:01.102009  485498 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:17:01.102019  485498 fix.go:54] fixHost starting: 
	I1108 10:17:01.102422  485498 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:17:01.126294  485498 fix.go:112] recreateIfNeeded on no-preload-872727: state=Stopped err=<nil>
	W1108 10:17:01.126329  485498 fix.go:138] unexpected machine state, will restart: <nil>
	W1108 10:16:57.931834  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	W1108 10:17:00.432850  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	I1108 10:17:01.129776  485498 out.go:252] * Restarting existing docker container for "no-preload-872727" ...
	I1108 10:17:01.129873  485498 cli_runner.go:164] Run: docker start no-preload-872727
	I1108 10:17:01.412153  485498 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:17:01.440104  485498 kic.go:430] container "no-preload-872727" state is running.
	I1108 10:17:01.441479  485498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-872727
	I1108 10:17:01.469908  485498 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/config.json ...
	I1108 10:17:01.470152  485498 machine.go:94] provisionDockerMachine start ...
	I1108 10:17:01.470218  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:01.490499  485498 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:01.490819  485498 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1108 10:17:01.490833  485498 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:17:01.491417  485498 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59144->127.0.0.1:33438: read: connection reset by peer
	I1108 10:17:04.648796  485498 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-872727
	
	I1108 10:17:04.648820  485498 ubuntu.go:182] provisioning hostname "no-preload-872727"
	I1108 10:17:04.648892  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:04.668525  485498 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:04.668890  485498 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1108 10:17:04.668902  485498 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-872727 && echo "no-preload-872727" | sudo tee /etc/hostname
	I1108 10:17:04.831376  485498 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-872727
	
	I1108 10:17:04.831536  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:04.850105  485498 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:04.850431  485498 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1108 10:17:04.850455  485498 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-872727' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-872727/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-872727' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:17:05.010774  485498 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:17:05.010860  485498 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:17:05.010933  485498 ubuntu.go:190] setting up certificates
	I1108 10:17:05.010959  485498 provision.go:84] configureAuth start
	I1108 10:17:05.011052  485498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-872727
	I1108 10:17:05.032770  485498 provision.go:143] copyHostCerts
	I1108 10:17:05.032844  485498 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:17:05.032861  485498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:17:05.032988  485498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:17:05.033132  485498 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:17:05.033139  485498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:17:05.033167  485498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:17:05.033225  485498 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:17:05.033230  485498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:17:05.033254  485498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:17:05.033304  485498 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.no-preload-872727 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-872727]
	I1108 10:17:05.323960  485498 provision.go:177] copyRemoteCerts
	I1108 10:17:05.324068  485498 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:17:05.324127  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:05.343633  485498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:17:05.449215  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:17:05.469461  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:17:05.488770  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:17:05.508658  485498 provision.go:87] duration metric: took 497.661779ms to configureAuth
	I1108 10:17:05.508684  485498 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:17:05.508885  485498 config.go:182] Loaded profile config "no-preload-872727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:05.509088  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:05.527432  485498 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:05.527749  485498 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1108 10:17:05.527771  485498 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:17:05.862392  485498 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:17:05.862420  485498 machine.go:97] duration metric: took 4.392257223s to provisionDockerMachine
	I1108 10:17:05.862432  485498 start.go:293] postStartSetup for "no-preload-872727" (driver="docker")
	I1108 10:17:05.862443  485498 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:17:05.862507  485498 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:17:05.862574  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:05.886768  485498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:17:06.003817  485498 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:17:06.009756  485498 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:17:06.009786  485498 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:17:06.009804  485498 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:17:06.009879  485498 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:17:06.009962  485498 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:17:06.010093  485498 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:17:06.020020  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:17:06.041166  485498 start.go:296] duration metric: took 178.718809ms for postStartSetup
	I1108 10:17:06.041275  485498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:17:06.041322  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:06.060542  485498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:17:06.170352  485498 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:17:06.175700  485498 fix.go:56] duration metric: took 5.073673156s for fixHost
	I1108 10:17:06.175726  485498 start.go:83] releasing machines lock for "no-preload-872727", held for 5.073753312s
	I1108 10:17:06.175797  485498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-872727
	I1108 10:17:06.193150  485498 ssh_runner.go:195] Run: cat /version.json
	I1108 10:17:06.193201  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:06.193207  485498 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:17:06.193280  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:06.216677  485498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:17:06.226601  485498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:17:06.426283  485498 ssh_runner.go:195] Run: systemctl --version
	I1108 10:17:06.434037  485498 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:17:06.472616  485498 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:17:06.477306  485498 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:17:06.477378  485498 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:17:06.486426  485498 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:17:06.486451  485498 start.go:496] detecting cgroup driver to use...
	I1108 10:17:06.486499  485498 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:17:06.486560  485498 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:17:06.502562  485498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:17:06.516973  485498 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:17:06.517094  485498 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:17:06.533594  485498 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:17:06.550475  485498 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:17:06.666905  485498 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:17:06.788688  485498 docker.go:234] disabling docker service ...
	I1108 10:17:06.788770  485498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:17:06.804789  485498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:17:06.818981  485498 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:17:06.935475  485498 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:17:07.054554  485498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:17:07.069350  485498 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:17:07.084984  485498 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:17:07.085061  485498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:07.095104  485498 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:17:07.095187  485498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:07.105153  485498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:07.117593  485498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:07.128749  485498 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:17:07.139171  485498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:07.149700  485498 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:07.159265  485498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:07.168805  485498 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:17:07.178753  485498 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:17:07.187015  485498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:07.304181  485498 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:17:07.444385  485498 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:17:07.444509  485498 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:17:07.448552  485498 start.go:564] Will wait 60s for crictl version
	I1108 10:17:07.448654  485498 ssh_runner.go:195] Run: which crictl
	I1108 10:17:07.452377  485498 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:17:07.483510  485498 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:17:07.483629  485498 ssh_runner.go:195] Run: crio --version
	I1108 10:17:07.513934  485498 ssh_runner.go:195] Run: crio --version
	I1108 10:17:07.547575  485498 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 10:17:02.933586  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	W1108 10:17:05.432725  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	I1108 10:17:07.550481  485498 cli_runner.go:164] Run: docker network inspect no-preload-872727 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:17:07.566747  485498 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:17:07.570721  485498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:17:07.581083  485498 kubeadm.go:884] updating cluster {Name:no-preload-872727 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-872727 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:17:07.581193  485498 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:17:07.581248  485498 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:17:07.617880  485498 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:17:07.617906  485498 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:17:07.617914  485498 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:17:07.618013  485498 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-872727 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-872727 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:17:07.618095  485498 ssh_runner.go:195] Run: crio config
	I1108 10:17:07.683741  485498 cni.go:84] Creating CNI manager for ""
	I1108 10:17:07.683812  485498 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:17:07.683844  485498 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:17:07.683907  485498 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-872727 NodeName:no-preload-872727 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:17:07.684081  485498 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-872727"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:17:07.684194  485498 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:17:07.693525  485498 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:17:07.693644  485498 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:17:07.701773  485498 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:17:07.715963  485498 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:17:07.729822  485498 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1108 10:17:07.743456  485498 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:17:07.746907  485498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:17:07.757166  485498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:07.871385  485498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:17:07.889345  485498 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727 for IP: 192.168.85.2
	I1108 10:17:07.889421  485498 certs.go:195] generating shared ca certs ...
	I1108 10:17:07.889481  485498 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:07.889719  485498 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:17:07.889807  485498 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:17:07.889850  485498 certs.go:257] generating profile certs ...
	I1108 10:17:07.890016  485498 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.key
	I1108 10:17:07.890137  485498 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/apiserver.key.969ee8c8
	I1108 10:17:07.890216  485498 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/proxy-client.key
	I1108 10:17:07.890387  485498 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:17:07.890462  485498 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:17:07.890489  485498 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:17:07.890549  485498 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:17:07.890604  485498 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:17:07.890683  485498 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:17:07.890795  485498 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:17:07.891639  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:17:07.916565  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:17:07.935998  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:17:07.954980  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:17:07.973489  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:17:07.991294  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 10:17:08.026302  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:17:08.063054  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:17:08.093980  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:17:08.126685  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:17:08.149035  485498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:17:08.183566  485498 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:17:08.200894  485498 ssh_runner.go:195] Run: openssl version
	I1108 10:17:08.210725  485498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:17:08.221155  485498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:17:08.225232  485498 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:17:08.225316  485498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:17:08.267971  485498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:17:08.280068  485498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:17:08.289810  485498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:17:08.293799  485498 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:17:08.293898  485498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:17:08.337932  485498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:17:08.347203  485498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:17:08.357832  485498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:08.361865  485498 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:08.361980  485498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:08.403997  485498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:17:08.412371  485498 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:17:08.416857  485498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:17:08.459481  485498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:17:08.501255  485498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:17:08.543169  485498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:17:08.584445  485498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:17:08.632418  485498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 10:17:08.689772  485498 kubeadm.go:401] StartCluster: {Name:no-preload-872727 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-872727 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:08.689874  485498 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:17:08.689973  485498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:17:08.750950  485498 cri.go:89] found id: "af4ee873dcf3a9f5542182a40c089c34fbb16da34cb89643f859ca8c741c206b"
	I1108 10:17:08.750974  485498 cri.go:89] found id: ""
	I1108 10:17:08.751048  485498 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:17:08.774683  485498 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:17:08Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:17:08.774796  485498 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:17:08.803488  485498 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:17:08.803505  485498 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:17:08.803577  485498 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:17:08.830720  485498 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:17:08.831615  485498 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-872727" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:08.832191  485498 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-292236/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-872727" cluster setting kubeconfig missing "no-preload-872727" context setting]
	I1108 10:17:08.833017  485498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:08.834864  485498 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:17:08.853218  485498 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:17:08.853259  485498 kubeadm.go:602] duration metric: took 49.747748ms to restartPrimaryControlPlane
	I1108 10:17:08.853274  485498 kubeadm.go:403] duration metric: took 163.513968ms to StartCluster
	I1108 10:17:08.853316  485498 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:08.853397  485498 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:08.854908  485498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:08.855182  485498 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:17:08.855594  485498 config.go:182] Loaded profile config "no-preload-872727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:08.855596  485498 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:17:08.855755  485498 addons.go:70] Setting storage-provisioner=true in profile "no-preload-872727"
	I1108 10:17:08.855764  485498 addons.go:70] Setting dashboard=true in profile "no-preload-872727"
	I1108 10:17:08.855772  485498 addons.go:239] Setting addon storage-provisioner=true in "no-preload-872727"
	I1108 10:17:08.855778  485498 addons.go:239] Setting addon dashboard=true in "no-preload-872727"
	I1108 10:17:08.855785  485498 addons.go:70] Setting default-storageclass=true in profile "no-preload-872727"
	W1108 10:17:08.855779  485498 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:17:08.855797  485498 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-872727"
	I1108 10:17:08.855813  485498 host.go:66] Checking if "no-preload-872727" exists ...
	I1108 10:17:08.856109  485498 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	W1108 10:17:08.855787  485498 addons.go:248] addon dashboard should already be in state true
	I1108 10:17:08.856472  485498 host.go:66] Checking if "no-preload-872727" exists ...
	I1108 10:17:08.856905  485498 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:17:08.857102  485498 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:17:08.863083  485498 out.go:179] * Verifying Kubernetes components...
	I1108 10:17:08.866554  485498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:08.932300  485498 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:17:08.934354  485498 addons.go:239] Setting addon default-storageclass=true in "no-preload-872727"
	W1108 10:17:08.934383  485498 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:17:08.934464  485498 host.go:66] Checking if "no-preload-872727" exists ...
	I1108 10:17:08.934966  485498 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:17:08.964035  485498 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:17:08.965041  485498 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:17:08.970956  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:17:08.970983  485498 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:17:08.970998  485498 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:17:08.971013  485498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:17:08.971058  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:08.971062  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:08.991727  485498 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:17:08.991750  485498 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:17:08.991822  485498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:17:09.033052  485498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:17:09.033728  485498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:17:09.042772  485498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:17:09.230736  485498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:17:09.255568  485498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:17:09.297045  485498 node_ready.go:35] waiting up to 6m0s for node "no-preload-872727" to be "Ready" ...
	I1108 10:17:09.341643  485498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:17:09.358517  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:17:09.358595  485498 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:17:09.451481  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:17:09.451542  485498 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:17:09.535229  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:17:09.535300  485498 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:17:09.574415  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:17:09.574483  485498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:17:09.590209  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:17:09.590232  485498 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:17:09.605595  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:17:09.605661  485498 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:17:09.619856  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:17:09.619921  485498 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:17:09.634621  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:17:09.634688  485498 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:17:09.651612  485498 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:17:09.651677  485498 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:17:09.665889  485498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1108 10:17:07.932431  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	W1108 10:17:10.432684  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	I1108 10:17:13.772569  485498 node_ready.go:49] node "no-preload-872727" is "Ready"
	I1108 10:17:13.772600  485498 node_ready.go:38] duration metric: took 4.475468221s for node "no-preload-872727" to be "Ready" ...
	I1108 10:17:13.772614  485498 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:17:13.772672  485498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:17:14.073643  485498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.817987879s)
	I1108 10:17:15.848500  485498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.506773471s)
	I1108 10:17:15.976563  485498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.310585101s)
	I1108 10:17:15.976764  485498 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.20407546s)
	I1108 10:17:15.976783  485498 api_server.go:72] duration metric: took 7.121566971s to wait for apiserver process to appear ...
	I1108 10:17:15.976800  485498 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:17:15.976818  485498 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:17:15.979727  485498 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-872727 addons enable metrics-server
	
	I1108 10:17:15.982773  485498 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W1108 10:17:12.932080  481559 node_ready.go:57] node "embed-certs-606645" has "Ready":"False" status (will retry)
	I1108 10:17:14.934573  481559 node_ready.go:49] node "embed-certs-606645" is "Ready"
	I1108 10:17:14.934649  481559 node_ready.go:38] duration metric: took 41.505899016s for node "embed-certs-606645" to be "Ready" ...
	I1108 10:17:14.934677  481559 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:17:14.934756  481559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:17:14.965467  481559 api_server.go:72] duration metric: took 42.175672236s to wait for apiserver process to appear ...
	I1108 10:17:14.965540  481559 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:17:14.965574  481559 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:17:14.982448  481559 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:17:14.985939  481559 api_server.go:141] control plane version: v1.34.1
	I1108 10:17:14.985965  481559 api_server.go:131] duration metric: took 20.40338ms to wait for apiserver health ...
	I1108 10:17:14.985974  481559 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:17:14.998501  481559 system_pods.go:59] 8 kube-system pods found
	I1108 10:17:14.998591  481559 system_pods.go:61] "coredns-66bc5c9577-t2frl" [e22d81d9-6568-4569-908f-cefa38ef9b76] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:14.998613  481559 system_pods.go:61] "etcd-embed-certs-606645" [38fe8240-e9fc-4f51-a081-491490c73119] Running
	I1108 10:17:14.998639  481559 system_pods.go:61] "kindnet-tb5h7" [693ec6c4-791c-4411-a276-f4bfbdfb845e] Running
	I1108 10:17:14.998671  481559 system_pods.go:61] "kube-apiserver-embed-certs-606645" [f40b54f2-7c30-45ae-b914-881edc3f3afe] Running
	I1108 10:17:14.998692  481559 system_pods.go:61] "kube-controller-manager-embed-certs-606645" [2d4b93ff-dfad-47c6-bc9b-ea156cc3c186] Running
	I1108 10:17:14.998713  481559 system_pods.go:61] "kube-proxy-tvxrb" [0ac67495-1d1e-481c-bf20-c9ccf1d66d41] Running
	I1108 10:17:14.998741  481559 system_pods.go:61] "kube-scheduler-embed-certs-606645" [8c26f963-b116-494f-b3c9-898f96ef6e94] Running
	I1108 10:17:14.998774  481559 system_pods.go:61] "storage-provisioner" [f82be00b-3c38-44dc-afef-f1e2434ae470] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:17:14.998795  481559 system_pods.go:74] duration metric: took 12.814672ms to wait for pod list to return data ...
	I1108 10:17:14.998817  481559 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:17:15.004846  481559 default_sa.go:45] found service account: "default"
	I1108 10:17:15.004996  481559 default_sa.go:55] duration metric: took 6.143649ms for default service account to be created ...
	I1108 10:17:15.005026  481559 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:17:15.024951  481559 system_pods.go:86] 8 kube-system pods found
	I1108 10:17:15.025056  481559 system_pods.go:89] "coredns-66bc5c9577-t2frl" [e22d81d9-6568-4569-908f-cefa38ef9b76] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:15.025078  481559 system_pods.go:89] "etcd-embed-certs-606645" [38fe8240-e9fc-4f51-a081-491490c73119] Running
	I1108 10:17:15.025113  481559 system_pods.go:89] "kindnet-tb5h7" [693ec6c4-791c-4411-a276-f4bfbdfb845e] Running
	I1108 10:17:15.025132  481559 system_pods.go:89] "kube-apiserver-embed-certs-606645" [f40b54f2-7c30-45ae-b914-881edc3f3afe] Running
	I1108 10:17:15.025165  481559 system_pods.go:89] "kube-controller-manager-embed-certs-606645" [2d4b93ff-dfad-47c6-bc9b-ea156cc3c186] Running
	I1108 10:17:15.025199  481559 system_pods.go:89] "kube-proxy-tvxrb" [0ac67495-1d1e-481c-bf20-c9ccf1d66d41] Running
	I1108 10:17:15.025220  481559 system_pods.go:89] "kube-scheduler-embed-certs-606645" [8c26f963-b116-494f-b3c9-898f96ef6e94] Running
	I1108 10:17:15.025245  481559 system_pods.go:89] "storage-provisioner" [f82be00b-3c38-44dc-afef-f1e2434ae470] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:17:15.025302  481559 retry.go:31] will retry after 263.637586ms: missing components: kube-dns
	I1108 10:17:15.296829  481559 system_pods.go:86] 8 kube-system pods found
	I1108 10:17:15.296930  481559 system_pods.go:89] "coredns-66bc5c9577-t2frl" [e22d81d9-6568-4569-908f-cefa38ef9b76] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:15.296953  481559 system_pods.go:89] "etcd-embed-certs-606645" [38fe8240-e9fc-4f51-a081-491490c73119] Running
	I1108 10:17:15.296984  481559 system_pods.go:89] "kindnet-tb5h7" [693ec6c4-791c-4411-a276-f4bfbdfb845e] Running
	I1108 10:17:15.297013  481559 system_pods.go:89] "kube-apiserver-embed-certs-606645" [f40b54f2-7c30-45ae-b914-881edc3f3afe] Running
	I1108 10:17:15.297035  481559 system_pods.go:89] "kube-controller-manager-embed-certs-606645" [2d4b93ff-dfad-47c6-bc9b-ea156cc3c186] Running
	I1108 10:17:15.297067  481559 system_pods.go:89] "kube-proxy-tvxrb" [0ac67495-1d1e-481c-bf20-c9ccf1d66d41] Running
	I1108 10:17:15.297087  481559 system_pods.go:89] "kube-scheduler-embed-certs-606645" [8c26f963-b116-494f-b3c9-898f96ef6e94] Running
	I1108 10:17:15.297165  481559 system_pods.go:89] "storage-provisioner" [f82be00b-3c38-44dc-afef-f1e2434ae470] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:17:15.297198  481559 retry.go:31] will retry after 374.170088ms: missing components: kube-dns
	I1108 10:17:15.676230  481559 system_pods.go:86] 8 kube-system pods found
	I1108 10:17:15.676311  481559 system_pods.go:89] "coredns-66bc5c9577-t2frl" [e22d81d9-6568-4569-908f-cefa38ef9b76] Running
	I1108 10:17:15.676333  481559 system_pods.go:89] "etcd-embed-certs-606645" [38fe8240-e9fc-4f51-a081-491490c73119] Running
	I1108 10:17:15.676354  481559 system_pods.go:89] "kindnet-tb5h7" [693ec6c4-791c-4411-a276-f4bfbdfb845e] Running
	I1108 10:17:15.676386  481559 system_pods.go:89] "kube-apiserver-embed-certs-606645" [f40b54f2-7c30-45ae-b914-881edc3f3afe] Running
	I1108 10:17:15.676406  481559 system_pods.go:89] "kube-controller-manager-embed-certs-606645" [2d4b93ff-dfad-47c6-bc9b-ea156cc3c186] Running
	I1108 10:17:15.676427  481559 system_pods.go:89] "kube-proxy-tvxrb" [0ac67495-1d1e-481c-bf20-c9ccf1d66d41] Running
	I1108 10:17:15.676447  481559 system_pods.go:89] "kube-scheduler-embed-certs-606645" [8c26f963-b116-494f-b3c9-898f96ef6e94] Running
	I1108 10:17:15.676481  481559 system_pods.go:89] "storage-provisioner" [f82be00b-3c38-44dc-afef-f1e2434ae470] Running
	I1108 10:17:15.676505  481559 system_pods.go:126] duration metric: took 671.457965ms to wait for k8s-apps to be running ...
	I1108 10:17:15.676525  481559 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:17:15.676622  481559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:17:15.695908  481559 system_svc.go:56] duration metric: took 19.373177ms WaitForService to wait for kubelet
	I1108 10:17:15.695940  481559 kubeadm.go:587] duration metric: took 42.906150127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:17:15.695959  481559 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:17:15.699910  481559 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:17:15.699942  481559 node_conditions.go:123] node cpu capacity is 2
	I1108 10:17:15.699982  481559 node_conditions.go:105] duration metric: took 3.98753ms to run NodePressure ...
	I1108 10:17:15.700002  481559 start.go:242] waiting for startup goroutines ...
	I1108 10:17:15.700011  481559 start.go:247] waiting for cluster config update ...
	I1108 10:17:15.700043  481559 start.go:256] writing updated cluster config ...
	I1108 10:17:15.700378  481559 ssh_runner.go:195] Run: rm -f paused
	I1108 10:17:15.704621  481559 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:17:15.708515  481559 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t2frl" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:15.714416  481559 pod_ready.go:94] pod "coredns-66bc5c9577-t2frl" is "Ready"
	I1108 10:17:15.714441  481559 pod_ready.go:86] duration metric: took 5.899906ms for pod "coredns-66bc5c9577-t2frl" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:15.718326  481559 pod_ready.go:83] waiting for pod "etcd-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:15.724015  481559 pod_ready.go:94] pod "etcd-embed-certs-606645" is "Ready"
	I1108 10:17:15.724087  481559 pod_ready.go:86] duration metric: took 5.731905ms for pod "etcd-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:15.726812  481559 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:15.733400  481559 pod_ready.go:94] pod "kube-apiserver-embed-certs-606645" is "Ready"
	I1108 10:17:15.733473  481559 pod_ready.go:86] duration metric: took 6.591177ms for pod "kube-apiserver-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:15.736501  481559 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:16.111340  481559 pod_ready.go:94] pod "kube-controller-manager-embed-certs-606645" is "Ready"
	I1108 10:17:16.111416  481559 pod_ready.go:86] duration metric: took 374.852157ms for pod "kube-controller-manager-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:16.310038  481559 pod_ready.go:83] waiting for pod "kube-proxy-tvxrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:16.709892  481559 pod_ready.go:94] pod "kube-proxy-tvxrb" is "Ready"
	I1108 10:17:16.709921  481559 pod_ready.go:86] duration metric: took 399.814692ms for pod "kube-proxy-tvxrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:16.909228  481559 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:17.311273  481559 pod_ready.go:94] pod "kube-scheduler-embed-certs-606645" is "Ready"
	I1108 10:17:17.311303  481559 pod_ready.go:86] duration metric: took 402.005498ms for pod "kube-scheduler-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:17.311317  481559 pod_ready.go:40] duration metric: took 1.6066607s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:17:17.367419  481559 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:17:17.372809  481559 out.go:179] * Done! kubectl is now configured to use "embed-certs-606645" cluster and "default" namespace by default
	I1108 10:17:15.985629  485498 addons.go:515] duration metric: took 7.130030279s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1108 10:17:15.991577  485498 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:17:15.991602  485498 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:17:16.477102  485498 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:17:16.486507  485498 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:17:16.487721  485498 api_server.go:141] control plane version: v1.34.1
	I1108 10:17:16.487793  485498 api_server.go:131] duration metric: took 510.984303ms to wait for apiserver health ...
	I1108 10:17:16.487817  485498 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:17:16.492080  485498 system_pods.go:59] 8 kube-system pods found
	I1108 10:17:16.492168  485498 system_pods.go:61] "coredns-66bc5c9577-7xnlf" [ee982620-6159-4ebb-8e21-781fc55700b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:16.492194  485498 system_pods.go:61] "etcd-no-preload-872727" [c19b8f4b-65c4-4dcd-8586-738c602db3e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:17:16.492215  485498 system_pods.go:61] "kindnet-lld9n" [b0ad3cfc-5d6d-4d1a-8688-05568684a055] Running
	I1108 10:17:16.492255  485498 system_pods.go:61] "kube-apiserver-no-preload-872727" [79f2cabd-27b1-40a6-97b9-6f1746991d6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:17:16.492282  485498 system_pods.go:61] "kube-controller-manager-no-preload-872727" [234914ad-be31-4b38-8789-792c2e74387d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:17:16.492303  485498 system_pods.go:61] "kube-proxy-tl7z2" [355abcec-162c-4e65-9dbe-35499009532f] Running
	I1108 10:17:16.492337  485498 system_pods.go:61] "kube-scheduler-no-preload-872727" [a3965441-8378-4e08-be57-f7187b137b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:17:16.492367  485498 system_pods.go:61] "storage-provisioner" [8dcb4f3f-f5f5-4ce7-a1e2-1def17299376] Running
	I1108 10:17:16.492426  485498 system_pods.go:74] duration metric: took 4.552211ms to wait for pod list to return data ...
	I1108 10:17:16.492450  485498 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:17:16.496902  485498 default_sa.go:45] found service account: "default"
	I1108 10:17:16.497003  485498 default_sa.go:55] duration metric: took 4.524822ms for default service account to be created ...
	I1108 10:17:16.497041  485498 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:17:16.591765  485498 system_pods.go:86] 8 kube-system pods found
	I1108 10:17:16.591810  485498 system_pods.go:89] "coredns-66bc5c9577-7xnlf" [ee982620-6159-4ebb-8e21-781fc55700b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:16.591820  485498 system_pods.go:89] "etcd-no-preload-872727" [c19b8f4b-65c4-4dcd-8586-738c602db3e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:17:16.591826  485498 system_pods.go:89] "kindnet-lld9n" [b0ad3cfc-5d6d-4d1a-8688-05568684a055] Running
	I1108 10:17:16.591833  485498 system_pods.go:89] "kube-apiserver-no-preload-872727" [79f2cabd-27b1-40a6-97b9-6f1746991d6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:17:16.591840  485498 system_pods.go:89] "kube-controller-manager-no-preload-872727" [234914ad-be31-4b38-8789-792c2e74387d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:17:16.591845  485498 system_pods.go:89] "kube-proxy-tl7z2" [355abcec-162c-4e65-9dbe-35499009532f] Running
	I1108 10:17:16.591866  485498 system_pods.go:89] "kube-scheduler-no-preload-872727" [a3965441-8378-4e08-be57-f7187b137b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:17:16.591883  485498 system_pods.go:89] "storage-provisioner" [8dcb4f3f-f5f5-4ce7-a1e2-1def17299376] Running
	I1108 10:17:16.591892  485498 system_pods.go:126] duration metric: took 94.832489ms to wait for k8s-apps to be running ...
	I1108 10:17:16.591905  485498 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:17:16.591967  485498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:17:16.606105  485498 system_svc.go:56] duration metric: took 14.189707ms WaitForService to wait for kubelet
	I1108 10:17:16.606177  485498 kubeadm.go:587] duration metric: took 7.750959502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:17:16.606213  485498 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:17:16.610738  485498 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:17:16.610818  485498 node_conditions.go:123] node cpu capacity is 2
	I1108 10:17:16.610847  485498 node_conditions.go:105] duration metric: took 4.600236ms to run NodePressure ...
	I1108 10:17:16.610874  485498 start.go:242] waiting for startup goroutines ...
	I1108 10:17:16.610904  485498 start.go:247] waiting for cluster config update ...
	I1108 10:17:16.610937  485498 start.go:256] writing updated cluster config ...
	I1108 10:17:16.611238  485498 ssh_runner.go:195] Run: rm -f paused
	I1108 10:17:16.615201  485498 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:17:16.691410  485498 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7xnlf" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:17:18.703906  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	W1108 10:17:21.198038  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	W1108 10:17:23.199727  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	W1108 10:17:25.699100  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 08 10:17:15 embed-certs-606645 crio[842]: time="2025-11-08T10:17:15.120038008Z" level=info msg="Created container 405f4d918cd3218dab5ca5ba6c31edeab3148ece6d25ede82944a7c18f68baa2: kube-system/coredns-66bc5c9577-t2frl/coredns" id=9b8a27a8-fed6-49a8-9ffd-8dd6443d4a4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:17:15 embed-certs-606645 crio[842]: time="2025-11-08T10:17:15.125123228Z" level=info msg="Starting container: 405f4d918cd3218dab5ca5ba6c31edeab3148ece6d25ede82944a7c18f68baa2" id=e14d8a83-e128-4a71-bc68-d1319eef825d name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:17:15 embed-certs-606645 crio[842]: time="2025-11-08T10:17:15.126911263Z" level=info msg="Started container" PID=1725 containerID=405f4d918cd3218dab5ca5ba6c31edeab3148ece6d25ede82944a7c18f68baa2 description=kube-system/coredns-66bc5c9577-t2frl/coredns id=e14d8a83-e128-4a71-bc68-d1319eef825d name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd7928135402472ceb0ec9085729c5ebc30984801bbc833ff98c40bfe4487c2b
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.908146774Z" level=info msg="Running pod sandbox: default/busybox/POD" id=daac0001-013f-4a0d-babb-85c14df99415 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.908224428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.919980753Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1b9cf91d561450c10878af97fd98562004c0cb7857ed2f091b6e3252581f7f25 UID:fda9aeba-3ce7-41ea-9797-1de68d199925 NetNS:/var/run/netns/54292cc0-4672-43d0-96c5-5d40ac5be732 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012bc28}] Aliases:map[]}"
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.920034078Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.932064038Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1b9cf91d561450c10878af97fd98562004c0cb7857ed2f091b6e3252581f7f25 UID:fda9aeba-3ce7-41ea-9797-1de68d199925 NetNS:/var/run/netns/54292cc0-4672-43d0-96c5-5d40ac5be732 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012bc28}] Aliases:map[]}"
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.932216672Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.936216968Z" level=info msg="Ran pod sandbox 1b9cf91d561450c10878af97fd98562004c0cb7857ed2f091b6e3252581f7f25 with infra container: default/busybox/POD" id=daac0001-013f-4a0d-babb-85c14df99415 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.937673604Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a519a8e8-7a43-457e-8627-c1a42a0cf961 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.937942718Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a519a8e8-7a43-457e-8627-c1a42a0cf961 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.937990448Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a519a8e8-7a43-457e-8627-c1a42a0cf961 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.939572098Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e5666af-ae62-4486-8f81-83305561737e name=/runtime.v1.ImageService/PullImage
	Nov 08 10:17:17 embed-certs-606645 crio[842]: time="2025-11-08T10:17:17.941674138Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.240239926Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0e5666af-ae62-4486-8f81-83305561737e name=/runtime.v1.ImageService/PullImage
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.241532614Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9a5cb77d-0da9-4af9-a691-dbf77072b94e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.246311428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=589db931-2e1f-468c-a693-eb510dd8a04d name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.256406882Z" level=info msg="Creating container: default/busybox/busybox" id=99cc8ee5-eab2-440c-a821-5da14d16d3c4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.256688459Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.26645317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.267445178Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.292059713Z" level=info msg="Created container 008f6e7cf9ce94517adef10e9ceee59de5afb8ef21e3b9af75beaf8bdd658e37: default/busybox/busybox" id=99cc8ee5-eab2-440c-a821-5da14d16d3c4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.296574993Z" level=info msg="Starting container: 008f6e7cf9ce94517adef10e9ceee59de5afb8ef21e3b9af75beaf8bdd658e37" id=be9f8ebb-33ee-4c0f-a40d-49a4cf91f9ce name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:17:20 embed-certs-606645 crio[842]: time="2025-11-08T10:17:20.300411407Z" level=info msg="Started container" PID=1781 containerID=008f6e7cf9ce94517adef10e9ceee59de5afb8ef21e3b9af75beaf8bdd658e37 description=default/busybox/busybox id=be9f8ebb-33ee-4c0f-a40d-49a4cf91f9ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b9cf91d561450c10878af97fd98562004c0cb7857ed2f091b6e3252581f7f25
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	008f6e7cf9ce9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   1b9cf91d56145       busybox                                      default
	405f4d918cd32       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   dd79281354024       coredns-66bc5c9577-t2frl                     kube-system
	e7ba45a51016e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   b8d88ba04f826       storage-provisioner                          kube-system
	9a3b45c0d84a9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   30d4c5b3c2ec8       kube-proxy-tvxrb                             kube-system
	1b04f4cea3dda       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   d4388f47ad96d       kindnet-tb5h7                                kube-system
	8355051bd841e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   0f65063af0361       etcd-embed-certs-606645                      kube-system
	188cbfef977c0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   832ed0b29c75a       kube-apiserver-embed-certs-606645            kube-system
	1e0bd33fdd6d9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   7a8e9b0c19490       kube-scheduler-embed-certs-606645            kube-system
	0a2a7a1d8d67a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   9e26eceab49fa       kube-controller-manager-embed-certs-606645   kube-system
	
	
	==> coredns [405f4d918cd3218dab5ca5ba6c31edeab3148ece6d25ede82944a7c18f68baa2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42644 - 63334 "HINFO IN 177001985715407124.4248968416130634947. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013575343s
	
	
	==> describe nodes <==
	Name:               embed-certs-606645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-606645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=embed-certs-606645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_16_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:16:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-606645
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:17:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:17:14 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:17:14 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:17:14 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:17:14 +0000   Sat, 08 Nov 2025 10:17:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-606645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                64b557bb-52b3-4c19-9c89-a18ac4cd988b
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-t2frl                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-606645                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-tb5h7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-606645             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-606645    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-tvxrb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-606645             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node embed-certs-606645 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-606645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-606645 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-606645 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-606645 event: Registered Node embed-certs-606645 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-606645 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 09:51] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8355051bd841ed6bff5e3846f1dff80829400559c2b750280949f78d1bb1b258] <==
	{"level":"warn","ts":"2025-11-08T10:16:23.783935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.785836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.801473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.823807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.843465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.855684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.875750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.899763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.934473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.969063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:23.983505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.006335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.017301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.043041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.066659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.073562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.089115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.146040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.172050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.194283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.194590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.240992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.270271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.291716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:16:24.396288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54962","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:17:29 up  2:59,  0 user,  load average: 5.07, 3.96, 2.83
	Linux embed-certs-606645 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b04f4cea3ddad88892bdb6eba39cc5ab34d2f6227504ed7eafed47ddac5038e] <==
	I1108 10:16:33.913133       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:16:33.913401       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:16:33.913532       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:16:33.913545       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:16:33.913556       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:16:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:16:34.115763       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:16:34.115789       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:16:34.115798       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:16:34.116086       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:17:04.035252       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 10:17:04.116059       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:17:04.116090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:17:04.117152       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1108 10:17:05.216560       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:17:05.216679       1 metrics.go:72] Registering metrics
	I1108 10:17:05.216789       1 controller.go:711] "Syncing nftables rules"
	I1108 10:17:14.041007       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:17:14.041056       1 main.go:301] handling current node
	I1108 10:17:24.037015       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:17:24.037099       1 main.go:301] handling current node
	
	
	==> kube-apiserver [188cbfef977c0fc59356c77dfa17e8afc0a9074ae6b36bcb7822dc3a464b835f] <==
	I1108 10:16:25.518279       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:16:25.538880       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:16:25.546211       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:16:25.586994       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:16:25.587142       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:16:25.589260       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:16:25.653183       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:16:26.218705       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:16:26.227101       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:16:26.227125       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:16:26.951871       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:16:27.016496       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:16:27.122978       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:16:27.133058       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 10:16:27.134177       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:16:27.148099       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:16:27.313949       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:16:28.148559       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:16:28.168129       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:16:28.192013       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:16:32.944865       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1108 10:16:33.116626       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:16:33.566295       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:16:33.600803       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1108 10:17:26.790724       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37966: use of closed network connection
	
	
	==> kube-controller-manager [0a2a7a1d8d67a0b067b0d9bf886ecedae15b73dccfe16a65256c966b27b20f26] <==
	I1108 10:16:32.352566       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:16:32.353057       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:16:32.353160       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:16:32.353304       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 10:16:32.353367       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 10:16:32.353406       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 10:16:32.353570       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:16:32.353762       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:16:32.354189       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 10:16:32.354306       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:16:32.356818       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:16:32.364113       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:16:32.364311       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:16:32.393899       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:16:32.402032       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:16:32.402045       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:16:32.402165       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:16:32.402172       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:16:32.402583       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:16:32.403359       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:16:32.404957       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:16:32.408697       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:16:32.409718       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:16:32.411865       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:17:17.405848       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9a3b45c0d84a956c6790259e559b53eb0b3d5fe79eda2938ba35bb8b5d052c13] <==
	I1108 10:16:33.908682       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:16:34.006780       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:16:34.107508       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:16:34.107548       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:16:34.107621       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:16:34.341402       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:16:34.341490       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:16:34.345716       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:16:34.346073       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:16:34.346100       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:16:34.347613       1 config.go:200] "Starting service config controller"
	I1108 10:16:34.347636       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:16:34.347653       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:16:34.347657       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:16:34.347667       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:16:34.347671       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:16:34.348265       1 config.go:309] "Starting node config controller"
	I1108 10:16:34.348282       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:16:34.348288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:16:34.448061       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:16:34.448107       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:16:34.448158       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1e0bd33fdd6d9c42577203176b17b8be27928a1724fb90fd04920bcab30fbfad] <==
	I1108 10:16:25.672288       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1108 10:16:25.698692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 10:16:25.700077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:16:25.700125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:16:25.700181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:16:25.701258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:16:25.700700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:16:25.711565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:16:25.711811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:16:25.711902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:16:25.713510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:16:25.713576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:16:25.713642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:16:25.713675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:16:25.713969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:16:25.714021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:16:25.714075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:16:25.714313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:16:25.714391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:16:25.714588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:16:26.594818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:16:26.619419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:16:26.659429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:16:26.660872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:16:29.571946       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:16:29 embed-certs-606645 kubelet[1308]: I1108 10:16:29.238693    1308 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-embed-certs-606645"
	Nov 08 10:16:29 embed-certs-606645 kubelet[1308]: E1108 10:16:29.258259    1308 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-606645\" already exists" pod="kube-system/kube-apiserver-embed-certs-606645"
	Nov 08 10:16:32 embed-certs-606645 kubelet[1308]: I1108 10:16:32.416593    1308 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 10:16:32 embed-certs-606645 kubelet[1308]: I1108 10:16:32.418463    1308 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.256156    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ac67495-1d1e-481c-bf20-c9ccf1d66d41-kube-proxy\") pod \"kube-proxy-tvxrb\" (UID: \"0ac67495-1d1e-481c-bf20-c9ccf1d66d41\") " pod="kube-system/kube-proxy-tvxrb"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.256229    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ac67495-1d1e-481c-bf20-c9ccf1d66d41-xtables-lock\") pod \"kube-proxy-tvxrb\" (UID: \"0ac67495-1d1e-481c-bf20-c9ccf1d66d41\") " pod="kube-system/kube-proxy-tvxrb"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.256322    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/693ec6c4-791c-4411-a276-f4bfbdfb845e-cni-cfg\") pod \"kindnet-tb5h7\" (UID: \"693ec6c4-791c-4411-a276-f4bfbdfb845e\") " pod="kube-system/kindnet-tb5h7"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.256549    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/693ec6c4-791c-4411-a276-f4bfbdfb845e-xtables-lock\") pod \"kindnet-tb5h7\" (UID: \"693ec6c4-791c-4411-a276-f4bfbdfb845e\") " pod="kube-system/kindnet-tb5h7"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.256601    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/693ec6c4-791c-4411-a276-f4bfbdfb845e-lib-modules\") pod \"kindnet-tb5h7\" (UID: \"693ec6c4-791c-4411-a276-f4bfbdfb845e\") " pod="kube-system/kindnet-tb5h7"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.256622    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z6cw\" (UniqueName: \"kubernetes.io/projected/693ec6c4-791c-4411-a276-f4bfbdfb845e-kube-api-access-6z6cw\") pod \"kindnet-tb5h7\" (UID: \"693ec6c4-791c-4411-a276-f4bfbdfb845e\") " pod="kube-system/kindnet-tb5h7"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.256646    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ac67495-1d1e-481c-bf20-c9ccf1d66d41-lib-modules\") pod \"kube-proxy-tvxrb\" (UID: \"0ac67495-1d1e-481c-bf20-c9ccf1d66d41\") " pod="kube-system/kube-proxy-tvxrb"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.256662    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbgg\" (UniqueName: \"kubernetes.io/projected/0ac67495-1d1e-481c-bf20-c9ccf1d66d41-kube-api-access-gpbgg\") pod \"kube-proxy-tvxrb\" (UID: \"0ac67495-1d1e-481c-bf20-c9ccf1d66d41\") " pod="kube-system/kube-proxy-tvxrb"
	Nov 08 10:16:33 embed-certs-606645 kubelet[1308]: I1108 10:16:33.573904    1308 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:16:34 embed-certs-606645 kubelet[1308]: I1108 10:16:34.335948    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tvxrb" podStartSLOduration=1.335928419 podStartE2EDuration="1.335928419s" podCreationTimestamp="2025-11-08 10:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:16:34.306178912 +0000 UTC m=+6.317813045" watchObservedRunningTime="2025-11-08 10:16:34.335928419 +0000 UTC m=+6.347562552"
	Nov 08 10:16:38 embed-certs-606645 kubelet[1308]: I1108 10:16:38.224495    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tb5h7" podStartSLOduration=6.224478773 podStartE2EDuration="6.224478773s" podCreationTimestamp="2025-11-08 10:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:16:34.368458199 +0000 UTC m=+6.380092349" watchObservedRunningTime="2025-11-08 10:16:38.224478773 +0000 UTC m=+10.236112906"
	Nov 08 10:17:14 embed-certs-606645 kubelet[1308]: I1108 10:17:14.560243    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 10:17:14 embed-certs-606645 kubelet[1308]: I1108 10:17:14.668611    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f82be00b-3c38-44dc-afef-f1e2434ae470-tmp\") pod \"storage-provisioner\" (UID: \"f82be00b-3c38-44dc-afef-f1e2434ae470\") " pod="kube-system/storage-provisioner"
	Nov 08 10:17:14 embed-certs-606645 kubelet[1308]: I1108 10:17:14.668662    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvcn6\" (UniqueName: \"kubernetes.io/projected/f82be00b-3c38-44dc-afef-f1e2434ae470-kube-api-access-tvcn6\") pod \"storage-provisioner\" (UID: \"f82be00b-3c38-44dc-afef-f1e2434ae470\") " pod="kube-system/storage-provisioner"
	Nov 08 10:17:14 embed-certs-606645 kubelet[1308]: I1108 10:17:14.668687    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e22d81d9-6568-4569-908f-cefa38ef9b76-config-volume\") pod \"coredns-66bc5c9577-t2frl\" (UID: \"e22d81d9-6568-4569-908f-cefa38ef9b76\") " pod="kube-system/coredns-66bc5c9577-t2frl"
	Nov 08 10:17:14 embed-certs-606645 kubelet[1308]: I1108 10:17:14.668707    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mqc\" (UniqueName: \"kubernetes.io/projected/e22d81d9-6568-4569-908f-cefa38ef9b76-kube-api-access-v4mqc\") pod \"coredns-66bc5c9577-t2frl\" (UID: \"e22d81d9-6568-4569-908f-cefa38ef9b76\") " pod="kube-system/coredns-66bc5c9577-t2frl"
	Nov 08 10:17:15 embed-certs-606645 kubelet[1308]: W1108 10:17:15.031180    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/crio-dd7928135402472ceb0ec9085729c5ebc30984801bbc833ff98c40bfe4487c2b WatchSource:0}: Error finding container dd7928135402472ceb0ec9085729c5ebc30984801bbc833ff98c40bfe4487c2b: Status 404 returned error can't find the container with id dd7928135402472ceb0ec9085729c5ebc30984801bbc833ff98c40bfe4487c2b
	Nov 08 10:17:15 embed-certs-606645 kubelet[1308]: I1108 10:17:15.458428    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t2frl" podStartSLOduration=42.458411307 podStartE2EDuration="42.458411307s" podCreationTimestamp="2025-11-08 10:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:17:15.430080451 +0000 UTC m=+47.441714600" watchObservedRunningTime="2025-11-08 10:17:15.458411307 +0000 UTC m=+47.470045440"
	Nov 08 10:17:15 embed-certs-606645 kubelet[1308]: I1108 10:17:15.483619    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.483599319 podStartE2EDuration="41.483599319s" podCreationTimestamp="2025-11-08 10:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:17:15.460125866 +0000 UTC m=+47.471760015" watchObservedRunningTime="2025-11-08 10:17:15.483599319 +0000 UTC m=+47.495233460"
	Nov 08 10:17:17 embed-certs-606645 kubelet[1308]: I1108 10:17:17.691725    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b6bc\" (UniqueName: \"kubernetes.io/projected/fda9aeba-3ce7-41ea-9797-1de68d199925-kube-api-access-2b6bc\") pod \"busybox\" (UID: \"fda9aeba-3ce7-41ea-9797-1de68d199925\") " pod="default/busybox"
	Nov 08 10:17:17 embed-certs-606645 kubelet[1308]: W1108 10:17:17.934331    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/crio-1b9cf91d561450c10878af97fd98562004c0cb7857ed2f091b6e3252581f7f25 WatchSource:0}: Error finding container 1b9cf91d561450c10878af97fd98562004c0cb7857ed2f091b6e3252581f7f25: Status 404 returned error can't find the container with id 1b9cf91d561450c10878af97fd98562004c0cb7857ed2f091b6e3252581f7f25
	
	
	==> storage-provisioner [e7ba45a51016eb79f282d70a252316ad041622e90c0cbd6de281c5a78c7e34fb] <==
	I1108 10:17:15.163792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:17:15.190911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:17:15.191580       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:17:15.201432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:15.213034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:17:15.259359       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:17:15.260022       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-606645_ba97b983-da9d-4844-9755-c21af3475a9b!
	I1108 10:17:15.259857       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87e0990b-37ee-4c3a-94da-724d0f4a2331", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-606645_ba97b983-da9d-4844-9755-c21af3475a9b became leader
	W1108 10:17:15.277477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:15.286391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:17:15.360660       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-606645_ba97b983-da9d-4844-9755-c21af3475a9b!
	W1108 10:17:17.290074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:17.295106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:19.299584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:19.304969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:21.308717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:21.314064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:23.317210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:23.323192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:25.326575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:25.333477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:27.336598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:27.344862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:29.355956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:29.365576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-606645 -n embed-certs-606645
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-606645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.40s)
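
For reference, the field-selector check that helpers_test.go:269 runs above (kubectl get po -A --field-selector=status.phase!=Running) can also be expressed with client-go. The sketch below is a minimal, hypothetical example, not code from the minikube test suite; it assumes a local kubeconfig whose current context points at the cluster under test (e.g. embed-certs-606645):

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig lives at the default path and its current
	// context points at the cluster under test.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same filter the test helper uses: any pod whose phase is not Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	if len(pods.Items) == 0 {
		fmt.Println("all pods are Running")
		return
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

An empty result corresponds to the empty jsonpath output the helper expects when every pod is Running.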

TestStartStop/group/no-preload/serial/Pause (8.23s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-872727 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-872727 --alsologtostderr -v=1: exit status 80 (2.273439924s)

-- stdout --
	* Pausing node no-preload-872727 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 10:18:01.140173  490423 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:18:01.140397  490423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:01.140429  490423 out.go:374] Setting ErrFile to fd 2...
	I1108 10:18:01.140448  490423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:01.140772  490423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:18:01.141253  490423 out.go:368] Setting JSON to false
	I1108 10:18:01.141310  490423 mustload.go:66] Loading cluster: no-preload-872727
	I1108 10:18:01.141753  490423 config.go:182] Loaded profile config "no-preload-872727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:01.142281  490423 cli_runner.go:164] Run: docker container inspect no-preload-872727 --format={{.State.Status}}
	I1108 10:18:01.163621  490423 host.go:66] Checking if "no-preload-872727" exists ...
	I1108 10:18:01.163962  490423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:18:01.285457  490423 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-08 10:18:01.273825171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:18:01.286227  490423 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-872727 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:18:01.291886  490423 out.go:179] * Pausing node no-preload-872727 ... 
	I1108 10:18:01.297880  490423 host.go:66] Checking if "no-preload-872727" exists ...
	I1108 10:18:01.298214  490423 ssh_runner.go:195] Run: systemctl --version
	I1108 10:18:01.298260  490423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-872727
	I1108 10:18:01.328139  490423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/no-preload-872727/id_rsa Username:docker}
	I1108 10:18:01.445851  490423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:18:01.462867  490423 pause.go:52] kubelet running: true
	I1108 10:18:01.462938  490423 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:18:01.759188  490423 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:18:01.759282  490423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:18:01.887703  490423 cri.go:89] found id: "f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95"
	I1108 10:18:01.887735  490423 cri.go:89] found id: "d8a6e6955b1700e728b0506a7d873f21785124dd5f3c6ce00ed73c7412fb24e7"
	I1108 10:18:01.887741  490423 cri.go:89] found id: "a101294ff5a06d18c6fefecf32199f4ab4989e79bb47341ea61784dab8608220"
	I1108 10:18:01.887746  490423 cri.go:89] found id: "4a7f5f22e728f39f9a0f36bc691d475caae9deb5d2b1bc5741b93a7fb1a4320e"
	I1108 10:18:01.887749  490423 cri.go:89] found id: "ec2b9322a1de4af91aed5a8283aa5006a918d9ec578e981fe89fc2d4684ee922"
	I1108 10:18:01.887752  490423 cri.go:89] found id: "07e3896f175cbc700250d85b4144c1a9d57dd773a77aaa820c8f3638851a6914"
	I1108 10:18:01.887756  490423 cri.go:89] found id: "c2aabe05d680cabcaa20b0665b445667d7738cbdb6f133edcb0233dc3bbc9d6b"
	I1108 10:18:01.887758  490423 cri.go:89] found id: "25640dc0fff195b37a317ec2cae1b3fac7db485a4e609e296a62be1978b92dec"
	I1108 10:18:01.887767  490423 cri.go:89] found id: "af4ee873dcf3a9f5542182a40c089c34fbb16da34cb89643f859ca8c741c206b"
	I1108 10:18:01.887775  490423 cri.go:89] found id: "8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	I1108 10:18:01.887778  490423 cri.go:89] found id: "52b5032212e00b125297bb977a888b5c53005489413ae8da2f80e4d3ee09b028"
	I1108 10:18:01.887782  490423 cri.go:89] found id: ""
	I1108 10:18:01.887839  490423 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:18:01.904986  490423 retry.go:31] will retry after 187.311883ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:18:02.093211  490423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:18:02.111093  490423 pause.go:52] kubelet running: false
	I1108 10:18:02.111160  490423 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:18:02.398384  490423 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:18:02.398463  490423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:18:02.483460  490423 cri.go:89] found id: "f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95"
	I1108 10:18:02.483481  490423 cri.go:89] found id: "d8a6e6955b1700e728b0506a7d873f21785124dd5f3c6ce00ed73c7412fb24e7"
	I1108 10:18:02.483487  490423 cri.go:89] found id: "a101294ff5a06d18c6fefecf32199f4ab4989e79bb47341ea61784dab8608220"
	I1108 10:18:02.483492  490423 cri.go:89] found id: "4a7f5f22e728f39f9a0f36bc691d475caae9deb5d2b1bc5741b93a7fb1a4320e"
	I1108 10:18:02.483495  490423 cri.go:89] found id: "ec2b9322a1de4af91aed5a8283aa5006a918d9ec578e981fe89fc2d4684ee922"
	I1108 10:18:02.483499  490423 cri.go:89] found id: "07e3896f175cbc700250d85b4144c1a9d57dd773a77aaa820c8f3638851a6914"
	I1108 10:18:02.483502  490423 cri.go:89] found id: "c2aabe05d680cabcaa20b0665b445667d7738cbdb6f133edcb0233dc3bbc9d6b"
	I1108 10:18:02.483504  490423 cri.go:89] found id: "25640dc0fff195b37a317ec2cae1b3fac7db485a4e609e296a62be1978b92dec"
	I1108 10:18:02.483515  490423 cri.go:89] found id: "af4ee873dcf3a9f5542182a40c089c34fbb16da34cb89643f859ca8c741c206b"
	I1108 10:18:02.483522  490423 cri.go:89] found id: "8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	I1108 10:18:02.483527  490423 cri.go:89] found id: "52b5032212e00b125297bb977a888b5c53005489413ae8da2f80e4d3ee09b028"
	I1108 10:18:02.483530  490423 cri.go:89] found id: ""
	I1108 10:18:02.483593  490423 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:18:02.498995  490423 retry.go:31] will retry after 390.565963ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:18:02Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:18:02.890235  490423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:18:02.916883  490423 pause.go:52] kubelet running: false
	I1108 10:18:02.916989  490423 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:18:03.160692  490423 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:18:03.160811  490423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:18:03.310183  490423 cri.go:89] found id: "f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95"
	I1108 10:18:03.310223  490423 cri.go:89] found id: "d8a6e6955b1700e728b0506a7d873f21785124dd5f3c6ce00ed73c7412fb24e7"
	I1108 10:18:03.310228  490423 cri.go:89] found id: "a101294ff5a06d18c6fefecf32199f4ab4989e79bb47341ea61784dab8608220"
	I1108 10:18:03.310232  490423 cri.go:89] found id: "4a7f5f22e728f39f9a0f36bc691d475caae9deb5d2b1bc5741b93a7fb1a4320e"
	I1108 10:18:03.310235  490423 cri.go:89] found id: "ec2b9322a1de4af91aed5a8283aa5006a918d9ec578e981fe89fc2d4684ee922"
	I1108 10:18:03.310239  490423 cri.go:89] found id: "07e3896f175cbc700250d85b4144c1a9d57dd773a77aaa820c8f3638851a6914"
	I1108 10:18:03.310242  490423 cri.go:89] found id: "c2aabe05d680cabcaa20b0665b445667d7738cbdb6f133edcb0233dc3bbc9d6b"
	I1108 10:18:03.310245  490423 cri.go:89] found id: "25640dc0fff195b37a317ec2cae1b3fac7db485a4e609e296a62be1978b92dec"
	I1108 10:18:03.310247  490423 cri.go:89] found id: "af4ee873dcf3a9f5542182a40c089c34fbb16da34cb89643f859ca8c741c206b"
	I1108 10:18:03.310254  490423 cri.go:89] found id: "8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	I1108 10:18:03.310257  490423 cri.go:89] found id: "52b5032212e00b125297bb977a888b5c53005489413ae8da2f80e4d3ee09b028"
	I1108 10:18:03.310260  490423 cri.go:89] found id: ""
	I1108 10:18:03.310334  490423 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:18:03.331222  490423 out.go:203] 
	W1108 10:18:03.334662  490423 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:18:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:18:03.334691  490423 out.go:285] * 
	W1108 10:18:03.342130  490423 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:18:03.345745  490423 out.go:203] 

                                                
                                                
** /stderr **
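The failure above is mechanical: the pause path finds the CRI-O containers with crictl, but the follow-up `sudo runc list -f json` exits with status 1 because runc's default state directory /run/runc does not exist on this node (a plausible reading is that CRI-O keeps its runtime state under a different root, so the default path was never created; the log does not confirm the exact cause). minikube wraps the call in a retry with a growing delay (187ms, then 390ms) before giving up. A minimal sketch of that retry-with-backoff shape, where listRunning is a hypothetical stand-in for the ssh_runner/retry plumbing seen in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning stands in for the "sudo runc list -f json" call from the log;
// it returns the command's combined output or the error that triggers a retry.
func listRunning() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	// Retry with a growing delay, roughly mirroring the 187ms -> 390ms
	// pattern above, and give up after the third failure.
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRunning()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("giving up: runc list kept failing")
}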
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-872727 --alsologtostderr -v=1 failed: exit status 80
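The test treats any non-zero exit from the pause invocation as a failure; exit status 80 corresponds to minikube's guest-error class, here the GUEST_PAUSE condition printed above. A hedged sketch of recovering that exit code from a caller with os/exec (illustrative only, not the test's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same pause invocation the test uses and inspect the exit code.
	cmd := exec.Command("out/minikube-linux-arm64", "pause", "-p", "no-preload-872727", "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s\n", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// ExitCode reports the status the process exited with, e.g. 80 above.
		fmt.Printf("pause failed: exit status %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run pause: %v\n", err)
	}
}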
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-872727
helpers_test.go:243: (dbg) docker inspect no-preload-872727:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662",
	        "Created": "2025-11-08T10:15:21.269248431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485624,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:17:01.162937769Z",
	            "FinishedAt": "2025-11-08T10:16:59.998032434Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/hosts",
	        "LogPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662-json.log",
	        "Name": "/no-preload-872727",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-872727:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-872727",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662",
	                "LowerDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-872727",
	                "Source": "/var/lib/docker/volumes/no-preload-872727/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-872727",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-872727",
	                "name.minikube.sigs.k8s.io": "no-preload-872727",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "455bc2abdfea372fbabce33f72e1c03326d7f813db96590e6af5c5361ab4b7e1",
	            "SandboxKey": "/var/run/docker/netns/455bc2abdfea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-872727": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:c5:ca:26:27:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d3d5cc4896cbfd283044d2bbac6b28bc7f91508576b47e0339f3f688dde7413",
	                    "EndpointID": "ab7090c93bf7d203bd551c93dc3b8d9a074498f1988a18659a8a970f389e4d7d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-872727",
	                        "a3d97acc3509"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
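The inspect output records where the container's ports are published on the host (SSH 22/tcp on 127.0.0.1:33438, the API server 8443/tcp on 33441), which is exactly what the earlier `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` call extracts. A small sketch doing the same lookup by shelling out to docker with that Go-template format (hostPort is an illustrative helper; it assumes docker is on PATH and the container exists):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks dockerd which host port a container port is published on,
// using the same Go-template query seen in the log above.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("no-preload-872727", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("SSH is published on 127.0.0.1:" + port) // 33438 in this run
}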
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-872727 -n no-preload-872727
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-872727 -n no-preload-872727: exit status 2 (481.55822ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-872727 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-872727 logs -n 25: (1.69589505s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-916440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-916440    │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ delete  │ -p cert-options-916440                                                                                                                                                                                                                        │ cert-options-916440    │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │                     │
	│ stop    │ -p old-k8s-version-332573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │ 08 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-332573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ image   │ old-k8s-version-332573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-332573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-328489 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p cert-expiration-328489                                                                                                                                                                                                                     │ cert-expiration-328489 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │                     │
	│ stop    │ -p no-preload-872727 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ stop    │ -p embed-certs-606645 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ image   │ no-preload-872727 image list --format=json                                                                                                                                                                                                    │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:17:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:17:42.483940  488441 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:17:42.484297  488441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:17:42.484331  488441 out.go:374] Setting ErrFile to fd 2...
	I1108 10:17:42.484352  488441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:17:42.484749  488441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:17:42.485203  488441 out.go:368] Setting JSON to false
	I1108 10:17:42.486209  488441 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10812,"bootTime":1762586251,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:17:42.486311  488441 start.go:143] virtualization:  
	I1108 10:17:42.491399  488441 out.go:179] * [embed-certs-606645] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:17:42.494650  488441 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:17:42.494724  488441 notify.go:221] Checking for updates...
	I1108 10:17:42.500683  488441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:17:42.503701  488441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:42.506682  488441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:17:42.509910  488441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:17:42.512889  488441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:17:42.516309  488441 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:42.516944  488441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:17:42.550239  488441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:17:42.550354  488441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:17:42.615350  488441 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:17:42.605243378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:17:42.615541  488441 docker.go:319] overlay module found
	I1108 10:17:42.618805  488441 out.go:179] * Using the docker driver based on existing profile
	I1108 10:17:42.621805  488441 start.go:309] selected driver: docker
	I1108 10:17:42.621832  488441 start.go:930] validating driver "docker" against &{Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:42.621938  488441 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:17:42.622742  488441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:17:42.685996  488441 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:17:42.67659548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:17:42.686361  488441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:17:42.686421  488441 cni.go:84] Creating CNI manager for ""
	I1108 10:17:42.686487  488441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:17:42.686534  488441 start.go:353] cluster config:
	{Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:42.691422  488441 out.go:179] * Starting "embed-certs-606645" primary control-plane node in "embed-certs-606645" cluster
	I1108 10:17:42.694451  488441 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:17:42.697691  488441 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:17:42.700640  488441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:17:42.700715  488441 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:17:42.700727  488441 cache.go:59] Caching tarball of preloaded images
	I1108 10:17:42.700729  488441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:17:42.700819  488441 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:17:42.700829  488441 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:17:42.700966  488441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/config.json ...
	I1108 10:17:42.725999  488441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:17:42.726025  488441 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:17:42.726038  488441 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:17:42.726061  488441 start.go:360] acquireMachinesLock for embed-certs-606645: {Name:mke419d0c52d844252caf31cfbe575cf42b647de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:42.726121  488441 start.go:364] duration metric: took 37.317µs to acquireMachinesLock for "embed-certs-606645"
	I1108 10:17:42.726147  488441 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:17:42.726156  488441 fix.go:54] fixHost starting: 
	I1108 10:17:42.726418  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:42.743423  488441 fix.go:112] recreateIfNeeded on embed-certs-606645: state=Stopped err=<nil>
	W1108 10:17:42.743456  488441 fix.go:138] unexpected machine state, will restart: <nil>
	W1108 10:17:42.198433  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	W1108 10:17:44.696649  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	I1108 10:17:42.746855  488441 out.go:252] * Restarting existing docker container for "embed-certs-606645" ...
	I1108 10:17:42.746975  488441 cli_runner.go:164] Run: docker start embed-certs-606645
	I1108 10:17:42.994066  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:43.027023  488441 kic.go:430] container "embed-certs-606645" state is running.
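The Last Start log shows fixHost reusing the stopped embed-certs-606645 machine: it inspects the container state, runs `docker start`, and confirms the container is running again. A hedged sketch of that check-then-start flow (containerState is an illustrative helper, not minikube's kic driver code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the docker state string ("running", "exited", ...)
// using the same --format query that appears in the log.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "embed-certs-606645"
	state, err := containerState(name)
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if state != "running" {
		// Restart the existing container instead of recreating it.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			fmt.Println("docker start failed:", err)
			return
		}
		state, _ = containerState(name)
	}
	fmt.Printf("container %q state is %s\n", name, state)
}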
	I1108 10:17:43.027434  488441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:17:43.054091  488441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/config.json ...
	I1108 10:17:43.054315  488441 machine.go:94] provisionDockerMachine start ...
	I1108 10:17:43.054378  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:43.077964  488441 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:43.078300  488441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1108 10:17:43.078315  488441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:17:43.079805  488441 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58046->127.0.0.1:33443: read: connection reset by peer
	I1108 10:17:46.237159  488441 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-606645
	
	I1108 10:17:46.237184  488441 ubuntu.go:182] provisioning hostname "embed-certs-606645"
	I1108 10:17:46.237269  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:46.256414  488441 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:46.256733  488441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1108 10:17:46.256749  488441 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-606645 && echo "embed-certs-606645" | sudo tee /etc/hostname
	I1108 10:17:46.432593  488441 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-606645
	
	I1108 10:17:46.432671  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:46.451731  488441 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:46.452092  488441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1108 10:17:46.452115  488441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606645/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:17:46.611319  488441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
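The multi-line SSH command above makes the node's own hostname resolvable: if /etc/hosts has no entry for embed-certs-606645, it either rewrites an existing 127.0.1.1 line or appends one. A sketch that renders the same snippet for an arbitrary hostname (renderHostsFix is illustrative; the real provisioner assembles this string internally):

package main

import "fmt"

// renderHostsFix builds the same /etc/hosts fix-up shell snippet that the
// provisioner runs over SSH in the log above, for a given hostname.
func renderHostsFix(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(renderHostsFix("embed-certs-606645"))
}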
	I1108 10:17:46.611402  488441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:17:46.611445  488441 ubuntu.go:190] setting up certificates
	I1108 10:17:46.611473  488441 provision.go:84] configureAuth start
	I1108 10:17:46.611600  488441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:17:46.628683  488441 provision.go:143] copyHostCerts
	I1108 10:17:46.628761  488441 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:17:46.628776  488441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:17:46.628852  488441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:17:46.629052  488441 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:17:46.629059  488441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:17:46.629093  488441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:17:46.629159  488441 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:17:46.629176  488441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:17:46.629203  488441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:17:46.629284  488441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606645 san=[127.0.0.1 192.168.76.2 embed-certs-606645 localhost minikube]
	I1108 10:17:46.723100  488441 provision.go:177] copyRemoteCerts
	I1108 10:17:46.723171  488441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:17:46.723229  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:46.740438  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:46.849229  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:17:46.871040  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:17:46.890244  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1108 10:17:46.909552  488441 provision.go:87] duration metric: took 298.04915ms to configureAuth
	I1108 10:17:46.909584  488441 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:17:46.909810  488441 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:46.909923  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:46.927269  488441 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:46.927598  488441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1108 10:17:46.927618  488441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:17:47.276965  488441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:17:47.276996  488441 machine.go:97] duration metric: took 4.222660782s to provisionDockerMachine
	I1108 10:17:47.277007  488441 start.go:293] postStartSetup for "embed-certs-606645" (driver="docker")
	I1108 10:17:47.277018  488441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:17:47.277076  488441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:17:47.277129  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:47.305646  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:47.437080  488441 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:17:47.440432  488441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:17:47.440460  488441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:17:47.440471  488441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:17:47.440523  488441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:17:47.440602  488441 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:17:47.440706  488441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:17:47.447871  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:17:47.466888  488441 start.go:296] duration metric: took 189.865441ms for postStartSetup
	I1108 10:17:47.466980  488441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:17:47.467031  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:47.483842  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:47.585820  488441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:17:47.590435  488441 fix.go:56] duration metric: took 4.864272759s for fixHost
	I1108 10:17:47.590469  488441 start.go:83] releasing machines lock for "embed-certs-606645", held for 4.864335661s
	I1108 10:17:47.590538  488441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:17:47.607320  488441 ssh_runner.go:195] Run: cat /version.json
	I1108 10:17:47.607382  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:47.607649  488441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:17:47.607718  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:47.627274  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:47.627875  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:47.732799  488441 ssh_runner.go:195] Run: systemctl --version
	I1108 10:17:47.821822  488441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:17:47.861430  488441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:17:47.865836  488441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:17:47.865908  488441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:17:47.873874  488441 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:17:47.873897  488441 start.go:496] detecting cgroup driver to use...
	I1108 10:17:47.873930  488441 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:17:47.873974  488441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:17:47.889073  488441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:17:47.904231  488441 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:17:47.904302  488441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:17:47.920343  488441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:17:47.933601  488441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:17:48.063535  488441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:17:48.252051  488441 docker.go:234] disabling docker service ...
	I1108 10:17:48.252141  488441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:17:48.267648  488441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:17:48.282519  488441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:17:48.408066  488441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:17:48.530362  488441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:17:48.543011  488441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:17:48.557585  488441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:17:48.557711  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.566675  488441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:17:48.566794  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.575921  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.589361  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.599123  488441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:17:48.608346  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.619402  488441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.628870  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.638135  488441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:17:48.645611  488441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:17:48.653017  488441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:48.772841  488441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:17:48.913463  488441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:17:48.913578  488441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:17:48.917921  488441 start.go:564] Will wait 60s for crictl version
	I1108 10:17:48.918012  488441 ssh_runner.go:195] Run: which crictl
	I1108 10:17:48.921531  488441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:17:48.946816  488441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:17:48.946976  488441 ssh_runner.go:195] Run: crio --version
	I1108 10:17:48.975970  488441 ssh_runner.go:195] Run: crio --version
	I1108 10:17:49.010602  488441 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
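The provisioning steps above (crictl.yaml, the CRI-O sed edits, the crio restart, the crictl version probe) are all plain shell commands pushed to the node container over SSH by ssh_runner. Below is a minimal Go sketch of running one such command remotely, assuming golang.org/x/crypto/ssh; the user, port, and key path are simply the values visible in this log, not an API exposed by minikube itself.

// ssh_run.go - illustrative only; not minikube's ssh_runner implementation.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and port are the ones shown in the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only node container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33443", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The same kind of command the log shows being run on the node.
	out, err := sess.CombinedOutput("sudo crictl version")
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}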
	W1108 10:17:46.697529  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	I1108 10:17:47.696584  485498 pod_ready.go:94] pod "coredns-66bc5c9577-7xnlf" is "Ready"
	I1108 10:17:47.696607  485498 pod_ready.go:86] duration metric: took 31.00512402s for pod "coredns-66bc5c9577-7xnlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.698605  485498 pod_ready.go:83] waiting for pod "etcd-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.702337  485498 pod_ready.go:94] pod "etcd-no-preload-872727" is "Ready"
	I1108 10:17:47.702415  485498 pod_ready.go:86] duration metric: took 3.789137ms for pod "etcd-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.704439  485498 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.710084  485498 pod_ready.go:94] pod "kube-apiserver-no-preload-872727" is "Ready"
	I1108 10:17:47.710104  485498 pod_ready.go:86] duration metric: took 5.645628ms for pod "kube-apiserver-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.713183  485498 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.895280  485498 pod_ready.go:94] pod "kube-controller-manager-no-preload-872727" is "Ready"
	I1108 10:17:47.895306  485498 pod_ready.go:86] duration metric: took 182.031067ms for pod "kube-controller-manager-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:48.096716  485498 pod_ready.go:83] waiting for pod "kube-proxy-tl7z2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:48.495484  485498 pod_ready.go:94] pod "kube-proxy-tl7z2" is "Ready"
	I1108 10:17:48.495507  485498 pod_ready.go:86] duration metric: took 398.762747ms for pod "kube-proxy-tl7z2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:48.694762  485498 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:49.095921  485498 pod_ready.go:94] pod "kube-scheduler-no-preload-872727" is "Ready"
	I1108 10:17:49.095944  485498 pod_ready.go:86] duration metric: took 401.153259ms for pod "kube-scheduler-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:49.095956  485498 pod_ready.go:40] duration metric: took 32.480689424s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:17:49.174662  485498 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:17:49.177609  485498 out.go:179] * Done! kubectl is now configured to use "no-preload-872727" cluster and "default" namespace by default
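The pod_ready.go lines above poll each kube-system pod until its Ready condition is True (or the pod is gone). A minimal client-go sketch of that check follows; the kubeconfig path and pod name are taken from this log for illustration, and this is not the pod_ready helper used by the test itself.

// pod_ready_sketch.go - illustrative polling of a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21866-292236/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-7xnlf", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("pod never became Ready")
}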
	I1108 10:17:49.013430  488441 cli_runner.go:164] Run: docker network inspect embed-certs-606645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:17:49.030494  488441 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:17:49.039697  488441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:17:49.049879  488441 kubeadm.go:884] updating cluster {Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:17:49.049985  488441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:17:49.050037  488441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:17:49.090547  488441 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:17:49.090568  488441 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:17:49.090621  488441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:17:49.132497  488441 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:17:49.132519  488441 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:17:49.132527  488441 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:17:49.132627  488441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-606645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:17:49.132705  488441 ssh_runner.go:195] Run: crio config
	I1108 10:17:49.273763  488441 cni.go:84] Creating CNI manager for ""
	I1108 10:17:49.273787  488441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:17:49.273808  488441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:17:49.273832  488441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606645 NodeName:embed-certs-606645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:17:49.273963  488441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606645"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:17:49.274039  488441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:17:49.290059  488441 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:17:49.290117  488441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:17:49.299182  488441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 10:17:49.322319  488441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:17:49.340298  488441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
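The kubeadm, kubelet, and kube-proxy configuration dumped above is generated with cluster-specific values (node IP, node name, API server port) substituted in before being copied to the node as kubeadm.yaml.new. Below is a minimal sketch of that kind of substitution using Go's text/template; the values come from this log, and minikube's own generator is more involved than this.

// kubeadm_template.go - illustrative rendering of a kubeadm config fragment.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

type params struct {
	NodeIP   string
	Port     int
	NodeName string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above, for illustration only.
	if err := t.Execute(os.Stdout, params{NodeIP: "192.168.76.2", Port: 8443, NodeName: "embed-certs-606645"}); err != nil {
		panic(err)
	}
}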
	I1108 10:17:49.359038  488441 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:17:49.365662  488441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:17:49.376590  488441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:49.512227  488441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:17:49.533047  488441 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645 for IP: 192.168.76.2
	I1108 10:17:49.533068  488441 certs.go:195] generating shared ca certs ...
	I1108 10:17:49.533084  488441 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:49.533218  488441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:17:49.533274  488441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:17:49.533289  488441 certs.go:257] generating profile certs ...
	I1108 10:17:49.533382  488441 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/client.key
	I1108 10:17:49.533446  488441 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key.9e91513e
	I1108 10:17:49.533488  488441 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.key
	I1108 10:17:49.533605  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:17:49.533639  488441 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:17:49.533652  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:17:49.533685  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:17:49.533712  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:17:49.533738  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:17:49.533782  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:17:49.534390  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:17:49.558477  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:17:49.583644  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:17:49.620562  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:17:49.657822  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 10:17:49.713990  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:17:49.731993  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:17:49.750569  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:17:49.770044  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:17:49.789524  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:17:49.808655  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:17:49.827777  488441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:17:49.842236  488441 ssh_runner.go:195] Run: openssl version
	I1108 10:17:49.853933  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:17:49.864705  488441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:17:49.870268  488441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:17:49.870362  488441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:17:49.920517  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:17:49.929730  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:17:49.939331  488441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:17:49.943155  488441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:17:49.943243  488441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:17:49.986988  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:17:49.996257  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:17:50.007577  488441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:50.012806  488441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:50.012881  488441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:50.057773  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:17:50.066295  488441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:17:50.070733  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:17:50.113678  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:17:50.155274  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:17:50.197381  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:17:50.238771  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:17:50.280557  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
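Each "-checkend 86400" call above asks openssl whether the certificate expires within the next 24 hours; a non-zero exit would trigger certificate regeneration. A rough Go equivalent using crypto/x509 follows, shown only as an illustration; the path is one of the certificates checked in the log.

// cert_checkend.go - illustrative 24h expiry check for a PEM certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors openssl's -checkend 86400: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}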
	I1108 10:17:50.328175  488441 kubeadm.go:401] StartCluster: {Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:50.328274  488441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:17:50.328334  488441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:17:50.404851  488441 cri.go:89] found id: ""
	I1108 10:17:50.404985  488441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:17:50.419026  488441 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:17:50.419059  488441 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:17:50.419127  488441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:17:50.434661  488441 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:17:50.435320  488441 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-606645" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:50.435587  488441 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-292236/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-606645" cluster setting kubeconfig missing "embed-certs-606645" context setting]
	I1108 10:17:50.436152  488441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:50.440447  488441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:17:50.455296  488441 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:17:50.455370  488441 kubeadm.go:602] duration metric: took 36.296202ms to restartPrimaryControlPlane
	I1108 10:17:50.455400  488441 kubeadm.go:403] duration metric: took 127.233818ms to StartCluster
	I1108 10:17:50.455451  488441 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:50.455538  488441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:50.457648  488441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:50.458778  488441 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:17:50.459855  488441 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:50.459905  488441 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:17:50.460032  488441 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-606645"
	I1108 10:17:50.460044  488441 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-606645"
	W1108 10:17:50.460050  488441 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:17:50.460084  488441 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:17:50.460599  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:50.463498  488441 addons.go:70] Setting default-storageclass=true in profile "embed-certs-606645"
	I1108 10:17:50.463528  488441 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606645"
	I1108 10:17:50.467183  488441 addons.go:70] Setting dashboard=true in profile "embed-certs-606645"
	I1108 10:17:50.467216  488441 addons.go:239] Setting addon dashboard=true in "embed-certs-606645"
	W1108 10:17:50.467224  488441 addons.go:248] addon dashboard should already be in state true
	I1108 10:17:50.467256  488441 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:17:50.467709  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:50.468301  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:50.469387  488441 out.go:179] * Verifying Kubernetes components...
	I1108 10:17:50.480495  488441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:50.554717  488441 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:17:50.554822  488441 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:17:50.557658  488441 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:17:50.557679  488441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:17:50.557751  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:50.565047  488441 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:17:50.565968  488441 addons.go:239] Setting addon default-storageclass=true in "embed-certs-606645"
	W1108 10:17:50.565989  488441 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:17:50.566015  488441 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:17:50.566491  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:50.568588  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:17:50.568607  488441 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:17:50.568672  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:50.616416  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:50.629895  488441 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:17:50.629920  488441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:17:50.629980  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:50.645233  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:50.662405  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:50.842499  488441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:17:50.889504  488441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606645" to be "Ready" ...
	I1108 10:17:50.936985  488441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:17:50.940803  488441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:17:51.027499  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:17:51.027570  488441 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:17:51.114295  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:17:51.114373  488441 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:17:51.198940  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:17:51.199025  488441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:17:51.240240  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:17:51.240313  488441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:17:51.267623  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:17:51.267723  488441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:17:51.290489  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:17:51.290563  488441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:17:51.316685  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:17:51.316756  488441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:17:51.339524  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:17:51.339595  488441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:17:51.363739  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:17:51.363808  488441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:17:51.383091  488441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:17:54.699109  488441 node_ready.go:49] node "embed-certs-606645" is "Ready"
	I1108 10:17:54.699136  488441 node_ready.go:38] duration metric: took 3.809550776s for node "embed-certs-606645" to be "Ready" ...
	I1108 10:17:54.699151  488441 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:17:54.699210  488441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:17:55.824074  488441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.887015389s)
	I1108 10:17:55.824140  488441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.883276115s)
	I1108 10:17:55.824512  488441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.441339442s)
	I1108 10:17:55.825234  488441 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.126009571s)
	I1108 10:17:55.825262  488441 api_server.go:72] duration metric: took 5.366399467s to wait for apiserver process to appear ...
	I1108 10:17:55.825269  488441 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:17:55.825282  488441 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:17:55.827531  488441 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-606645 addons enable metrics-server
	
	I1108 10:17:55.839107  488441 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:17:55.839189  488441 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:17:55.851449  488441 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 10:17:55.854350  488441 addons.go:515] duration metric: took 5.394427496s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 10:17:56.326135  488441 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:17:56.336060  488441 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:17:56.337176  488441 api_server.go:141] control plane version: v1.34.1
	I1108 10:17:56.337210  488441 api_server.go:131] duration metric: took 511.934204ms to wait for apiserver health ...
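The healthz wait above tolerates transient 500 responses (here from poststarthook/rbac/bootstrap-roles) and keeps polling until the endpoint returns 200. A minimal Go sketch of such a poll against the same endpoint follows; TLS verification is skipped purely for illustration, since this sketch does not load the cluster CA.

// healthz_poll.go - illustrative polling of the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint and timeout are taken from the log above, for illustration only.
	url := "https://192.168.76.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// This sketch does not trust the cluster CA, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz request failed: %v, retrying\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthz never became ready")
}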
	I1108 10:17:56.337249  488441 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:17:56.341236  488441 system_pods.go:59] 8 kube-system pods found
	I1108 10:17:56.341280  488441 system_pods.go:61] "coredns-66bc5c9577-t2frl" [e22d81d9-6568-4569-908f-cefa38ef9b76] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:56.341292  488441 system_pods.go:61] "etcd-embed-certs-606645" [38fe8240-e9fc-4f51-a081-491490c73119] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:17:56.341312  488441 system_pods.go:61] "kindnet-tb5h7" [693ec6c4-791c-4411-a276-f4bfbdfb845e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:17:56.341320  488441 system_pods.go:61] "kube-apiserver-embed-certs-606645" [f40b54f2-7c30-45ae-b914-881edc3f3afe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:17:56.341332  488441 system_pods.go:61] "kube-controller-manager-embed-certs-606645" [2d4b93ff-dfad-47c6-bc9b-ea156cc3c186] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:17:56.341339  488441 system_pods.go:61] "kube-proxy-tvxrb" [0ac67495-1d1e-481c-bf20-c9ccf1d66d41] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:17:56.341346  488441 system_pods.go:61] "kube-scheduler-embed-certs-606645" [8c26f963-b116-494f-b3c9-898f96ef6e94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:17:56.341360  488441 system_pods.go:61] "storage-provisioner" [f82be00b-3c38-44dc-afef-f1e2434ae470] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:17:56.341370  488441 system_pods.go:74] duration metric: took 4.110748ms to wait for pod list to return data ...
	I1108 10:17:56.341382  488441 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:17:56.344026  488441 default_sa.go:45] found service account: "default"
	I1108 10:17:56.344050  488441 default_sa.go:55] duration metric: took 2.660676ms for default service account to be created ...
	I1108 10:17:56.344060  488441 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:17:56.347483  488441 system_pods.go:86] 8 kube-system pods found
	I1108 10:17:56.347517  488441 system_pods.go:89] "coredns-66bc5c9577-t2frl" [e22d81d9-6568-4569-908f-cefa38ef9b76] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:56.347527  488441 system_pods.go:89] "etcd-embed-certs-606645" [38fe8240-e9fc-4f51-a081-491490c73119] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:17:56.347537  488441 system_pods.go:89] "kindnet-tb5h7" [693ec6c4-791c-4411-a276-f4bfbdfb845e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:17:56.347546  488441 system_pods.go:89] "kube-apiserver-embed-certs-606645" [f40b54f2-7c30-45ae-b914-881edc3f3afe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:17:56.347553  488441 system_pods.go:89] "kube-controller-manager-embed-certs-606645" [2d4b93ff-dfad-47c6-bc9b-ea156cc3c186] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:17:56.347560  488441 system_pods.go:89] "kube-proxy-tvxrb" [0ac67495-1d1e-481c-bf20-c9ccf1d66d41] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:17:56.347579  488441 system_pods.go:89] "kube-scheduler-embed-certs-606645" [8c26f963-b116-494f-b3c9-898f96ef6e94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:17:56.347590  488441 system_pods.go:89] "storage-provisioner" [f82be00b-3c38-44dc-afef-f1e2434ae470] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:17:56.347600  488441 system_pods.go:126] duration metric: took 3.533922ms to wait for k8s-apps to be running ...
	I1108 10:17:56.347613  488441 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:17:56.347671  488441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:17:56.361028  488441 system_svc.go:56] duration metric: took 13.405747ms WaitForService to wait for kubelet
	I1108 10:17:56.361103  488441 kubeadm.go:587] duration metric: took 5.902238119s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:17:56.361135  488441 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:17:56.365614  488441 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:17:56.365694  488441 node_conditions.go:123] node cpu capacity is 2
	I1108 10:17:56.365721  488441 node_conditions.go:105] duration metric: took 4.548758ms to run NodePressure ...
	I1108 10:17:56.365746  488441 start.go:242] waiting for startup goroutines ...
	I1108 10:17:56.365790  488441 start.go:247] waiting for cluster config update ...
	I1108 10:17:56.365815  488441 start.go:256] writing updated cluster config ...
	I1108 10:17:56.366134  488441 ssh_runner.go:195] Run: rm -f paused
	I1108 10:17:56.370147  488441 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:17:56.373779  488441 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t2frl" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:17:58.379082  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:00.382304  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 08 10:17:41 no-preload-872727 crio[651]: time="2025-11-08T10:17:41.356157254Z" level=info msg="Removed container d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd/dashboard-metrics-scraper" id=7433b73b-d67d-471c-8b21-e10e166fa3e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:17:46 no-preload-872727 conmon[1146]: conmon a101294ff5a06d18c6fe <ninfo>: container 1149 exited with status 1
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.353669222Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=80a24fac-0a84-4428-aa15-8eaefe6bd5b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.3545751Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f52c4f14-305d-483a-ad98-6eb16add6c92 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.355439256Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=351e58e6-7802-4bd7-a8f6-8d22c24757d9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.355530776Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.364259304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.364546034Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/caff05c8e6d03e6f4193c468106d68f8c820f068628a13e9c5a7159dfd11d367/merged/etc/passwd: no such file or directory"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.364634371Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/caff05c8e6d03e6f4193c468106d68f8c820f068628a13e9c5a7159dfd11d367/merged/etc/group: no such file or directory"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.364975125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.384785415Z" level=info msg="Created container f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95: kube-system/storage-provisioner/storage-provisioner" id=351e58e6-7802-4bd7-a8f6-8d22c24757d9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.38554296Z" level=info msg="Starting container: f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95" id=6a648da4-2a99-43da-b16e-8d515bbf2ae9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.38783811Z" level=info msg="Started container" PID=1643 containerID=f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95 description=kube-system/storage-provisioner/storage-provisioner id=6a648da4-2a99-43da-b16e-8d515bbf2ae9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=673b694be03b780ed8f59eaa4b9ff20ebd6d2f8bb155f8ee50451fe956cbfc75
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.215385527Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.219798357Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.219832442Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.219855547Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.223781744Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.223815008Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.223838655Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.227908525Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.227943241Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.227969465Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.239394058Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.239588373Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f334ef93153e7       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           18 seconds ago      Running             storage-provisioner         2                   673b694be03b7       storage-provisioner                          kube-system
	8e40016b4466d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   d5a08333b2ac4       dashboard-metrics-scraper-6ffb444bf9-gqtzd   kubernetes-dashboard
	52b5032212e00       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   b9359a5394efb       kubernetes-dashboard-855c9754f9-q4gsc        kubernetes-dashboard
	9488d464568f4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   c38fde1107026       busybox                                      default
	d8a6e6955b170       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   88031f1d74b8f       coredns-66bc5c9577-7xnlf                     kube-system
	a101294ff5a06       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           48 seconds ago      Exited              storage-provisioner         1                   673b694be03b7       storage-provisioner                          kube-system
	4a7f5f22e728f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   4fa8d0dfda9d8       kube-proxy-tl7z2                             kube-system
	ec2b9322a1de4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   a51bc3803df47       kindnet-lld9n                                kube-system
	07e3896f175cb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           55 seconds ago      Running             kube-controller-manager     1                   f8cf5c99088c5       kube-controller-manager-no-preload-872727    kube-system
	c2aabe05d680c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           55 seconds ago      Running             kube-scheduler              1                   a2018b927e18a       kube-scheduler-no-preload-872727             kube-system
	25640dc0fff19       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           55 seconds ago      Running             etcd                        1                   63554448201b6       etcd-no-preload-872727                       kube-system
	af4ee873dcf3a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   77d208130cf09       kube-apiserver-no-preload-872727             kube-system
	
	
	==> coredns [d8a6e6955b1700e728b0506a7d873f21785124dd5f3c6ce00ed73c7412fb24e7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60674 - 33592 "HINFO IN 4862426244402300975.1767909836293083321. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030885776s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-872727
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-872727
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=no-preload-872727
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_16_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:16:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-872727
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:17:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:17:55 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:17:55 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:17:55 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:17:55 +0000   Sat, 08 Nov 2025 10:16:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-872727
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                f5ae8ced-8225-4268-ba4a-f32dd64e1a62
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-7xnlf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-no-preload-872727                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         112s
	  kube-system                 kindnet-lld9n                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-872727              250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-872727     200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-tl7z2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-872727              100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gqtzd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-q4gsc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 106s                 kube-proxy       
	  Normal   Starting                 48s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node no-preload-872727 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node no-preload-872727 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node no-preload-872727 status is now: NodeHasSufficientPID
	  Normal   Starting                 113s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 113s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    112s                 kubelet          Node no-preload-872727 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     112s                 kubelet          Node no-preload-872727 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  112s                 kubelet          Node no-preload-872727 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           109s                 node-controller  Node no-preload-872727 event: Registered Node no-preload-872727 in Controller
	  Normal   NodeReady                92s                  kubelet          Node no-preload-872727 status is now: NodeReady
	  Normal   Starting                 56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node no-preload-872727 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node no-preload-872727 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node no-preload-872727 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                  node-controller  Node no-preload-872727 event: Registered Node no-preload-872727 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [25640dc0fff195b37a317ec2cae1b3fac7db485a4e609e296a62be1978b92dec] <==
	{"level":"warn","ts":"2025-11-08T10:17:11.971543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:11.988634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.033515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.061038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.134596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.136553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.169829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.183279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.206770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.245808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.256767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.290462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.321400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.342564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.357790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.383992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.399460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.416854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.442418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.502715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.523598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.554506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.615090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.659034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.737720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37102","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:18:05 up  3:00,  0 user,  load average: 5.28, 4.09, 2.91
	Linux no-preload-872727 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec2b9322a1de4af91aed5a8283aa5006a918d9ec578e981fe89fc2d4684ee922] <==
	I1108 10:17:16.017986       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:17:16.018423       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:17:16.018584       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:17:16.018628       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:17:16.018669       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:17:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:17:16.214669       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:17:16.214766       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:17:16.214800       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:17:16.215359       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:17:46.216846       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:17:46.216880       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 10:17:46.217074       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:17:46.218054       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1108 10:17:47.715806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:17:47.715920       1 metrics.go:72] Registering metrics
	I1108 10:17:47.716004       1 controller.go:711] "Syncing nftables rules"
	I1108 10:17:56.215031       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:17:56.215085       1 main.go:301] handling current node
	
	
	==> kube-apiserver [af4ee873dcf3a9f5542182a40c089c34fbb16da34cb89643f859ca8c741c206b] <==
	I1108 10:17:14.165931       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:17:14.165953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:17:14.178896       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:17:14.179158       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:17:14.179437       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:17:14.179564       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:17:14.179579       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:17:14.179761       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:17:14.179791       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:17:14.179798       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:17:14.179803       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:17:14.180349       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:17:14.216094       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1108 10:17:14.247121       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:17:14.571480       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:17:15.154346       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:17:15.199484       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:17:15.404644       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:17:15.570028       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:17:15.636817       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:17:15.933266       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.255.172"}
	I1108 10:17:15.970444       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.165.146"}
	I1108 10:17:18.235079       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:17:18.534989       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:17:18.636747       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [07e3896f175cbc700250d85b4144c1a9d57dd773a77aaa820c8f3638851a6914] <==
	I1108 10:17:18.122076       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:17:18.125313       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:17:18.128011       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:17:18.128767       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 10:17:18.128841       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:17:18.128852       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:17:18.128864       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:17:18.129242       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:17:18.129425       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:17:18.130681       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:17:18.130762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:17:18.130805       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:17:18.132076       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:17:18.132121       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:17:18.132151       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:17:18.132888       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:17:18.167988       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:17:18.172425       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:17:18.176398       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:17:18.176568       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:17:18.184083       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-872727"
	I1108 10:17:18.184780       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:17:18.184826       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:17:18.186189       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:17:18.187207       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [4a7f5f22e728f39f9a0f36bc691d475caae9deb5d2b1bc5741b93a7fb1a4320e] <==
	I1108 10:17:16.055459       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:17:16.202717       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:17:16.304995       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:17:16.305797       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:17:16.305943       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:17:16.356334       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:17:16.356467       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:17:16.362778       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:17:16.363147       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:17:16.363172       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:17:16.364302       1 config.go:200] "Starting service config controller"
	I1108 10:17:16.364366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:17:16.371858       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:17:16.371962       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:17:16.372012       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:17:16.372039       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:17:16.375745       1 config.go:309] "Starting node config controller"
	I1108 10:17:16.377014       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:17:16.377086       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:17:16.465292       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:17:16.472645       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:17:16.472758       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c2aabe05d680cabcaa20b0665b445667d7738cbdb6f133edcb0233dc3bbc9d6b] <==
	I1108 10:17:13.005794       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:17:16.161937       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:17:16.162039       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:17:16.169314       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:17:16.169437       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:17:16.169516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:16.169549       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:16.169598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:17:16.169649       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:17:16.169812       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:17:16.169942       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:17:16.271147       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:17:16.271596       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:16.277056       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:17:18 no-preload-872727 kubelet[769]: I1108 10:17:18.865757     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf98k\" (UniqueName: \"kubernetes.io/projected/9f8ea253-398d-4f7a-abbc-90ac9d766530-kube-api-access-mf98k\") pod \"dashboard-metrics-scraper-6ffb444bf9-gqtzd\" (UID: \"9f8ea253-398d-4f7a-abbc-90ac9d766530\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd"
	Nov 08 10:17:18 no-preload-872727 kubelet[769]: I1108 10:17:18.865823     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9f8ea253-398d-4f7a-abbc-90ac9d766530-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gqtzd\" (UID: \"9f8ea253-398d-4f7a-abbc-90ac9d766530\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd"
	Nov 08 10:17:18 no-preload-872727 kubelet[769]: I1108 10:17:18.865848     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/819ad1c3-65e1-4aa1-9ef5-cc4151ca68be-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-q4gsc\" (UID: \"819ad1c3-65e1-4aa1-9ef5-cc4151ca68be\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q4gsc"
	Nov 08 10:17:18 no-preload-872727 kubelet[769]: I1108 10:17:18.865875     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnck2\" (UniqueName: \"kubernetes.io/projected/819ad1c3-65e1-4aa1-9ef5-cc4151ca68be-kube-api-access-gnck2\") pod \"kubernetes-dashboard-855c9754f9-q4gsc\" (UID: \"819ad1c3-65e1-4aa1-9ef5-cc4151ca68be\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q4gsc"
	Nov 08 10:17:23 no-preload-872727 kubelet[769]: I1108 10:17:23.282568     769 scope.go:117] "RemoveContainer" containerID="5b65167036a4ac2cd5312ccfe00f3c71399270c5039b0b92e76ca62ba7c31842"
	Nov 08 10:17:24 no-preload-872727 kubelet[769]: I1108 10:17:24.292252     769 scope.go:117] "RemoveContainer" containerID="5b65167036a4ac2cd5312ccfe00f3c71399270c5039b0b92e76ca62ba7c31842"
	Nov 08 10:17:24 no-preload-872727 kubelet[769]: I1108 10:17:24.292497     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:24 no-preload-872727 kubelet[769]: E1108 10:17:24.292681     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:17:25 no-preload-872727 kubelet[769]: I1108 10:17:25.296963     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:25 no-preload-872727 kubelet[769]: E1108 10:17:25.299943     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:17:29 no-preload-872727 kubelet[769]: I1108 10:17:29.033299     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:29 no-preload-872727 kubelet[769]: E1108 10:17:29.033512     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: I1108 10:17:41.179753     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: I1108 10:17:41.338438     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: I1108 10:17:41.338739     769 scope.go:117] "RemoveContainer" containerID="8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: E1108 10:17:41.338890     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: I1108 10:17:41.358211     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q4gsc" podStartSLOduration=13.664260595 podStartE2EDuration="23.35819417s" podCreationTimestamp="2025-11-08 10:17:18 +0000 UTC" firstStartedPulling="2025-11-08 10:17:19.0948774 +0000 UTC m=+11.198752800" lastFinishedPulling="2025-11-08 10:17:28.788810976 +0000 UTC m=+20.892686375" observedRunningTime="2025-11-08 10:17:29.329609047 +0000 UTC m=+21.433484463" watchObservedRunningTime="2025-11-08 10:17:41.35819417 +0000 UTC m=+33.462069570"
	Nov 08 10:17:46 no-preload-872727 kubelet[769]: I1108 10:17:46.352818     769 scope.go:117] "RemoveContainer" containerID="a101294ff5a06d18c6fefecf32199f4ab4989e79bb47341ea61784dab8608220"
	Nov 08 10:17:49 no-preload-872727 kubelet[769]: I1108 10:17:49.033364     769 scope.go:117] "RemoveContainer" containerID="8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	Nov 08 10:17:49 no-preload-872727 kubelet[769]: E1108 10:17:49.033540     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:18:01 no-preload-872727 kubelet[769]: I1108 10:18:01.180076     769 scope.go:117] "RemoveContainer" containerID="8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	Nov 08 10:18:01 no-preload-872727 kubelet[769]: E1108 10:18:01.180257     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:18:01 no-preload-872727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:18:01 no-preload-872727 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:18:01 no-preload-872727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [52b5032212e00b125297bb977a888b5c53005489413ae8da2f80e4d3ee09b028] <==
	2025/11/08 10:17:28 Using namespace: kubernetes-dashboard
	2025/11/08 10:17:28 Using in-cluster config to connect to apiserver
	2025/11/08 10:17:28 Using secret token for csrf signing
	2025/11/08 10:17:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:17:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:17:28 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:17:28 Generating JWE encryption key
	2025/11/08 10:17:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:17:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:17:29 Initializing JWE encryption key from synchronized object
	2025/11/08 10:17:29 Creating in-cluster Sidecar client
	2025/11/08 10:17:29 Serving insecurely on HTTP port: 9090
	2025/11/08 10:17:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:17:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:17:28 Starting overwatch
	
	
	==> storage-provisioner [a101294ff5a06d18c6fefecf32199f4ab4989e79bb47341ea61784dab8608220] <==
	I1108 10:17:16.014624       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:17:46.016181       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95] <==
	I1108 10:17:46.408065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:17:46.421548       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:17:46.421732       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:17:46.424662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:49.880546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:54.141260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:57.739657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:00.794438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:03.816116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:03.821304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:18:03.821470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:18:03.821623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-872727_985f850f-b4d0-464e-8c8c-487632a580f6!
	I1108 10:18:03.822558       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e3bc8d2-5847-4f52-bedc-77da0e14b7f9", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-872727_985f850f-b4d0-464e-8c8c-487632a580f6 became leader
	W1108 10:18:03.827213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:03.847511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:18:03.923237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-872727_985f850f-b4d0-464e-8c8c-487632a580f6!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-872727 -n no-preload-872727
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-872727 -n no-preload-872727: exit status 2 (450.052964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-872727 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-872727
helpers_test.go:243: (dbg) docker inspect no-preload-872727:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662",
	        "Created": "2025-11-08T10:15:21.269248431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485624,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:17:01.162937769Z",
	            "FinishedAt": "2025-11-08T10:16:59.998032434Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/hosts",
	        "LogPath": "/var/lib/docker/containers/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662/a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662-json.log",
	        "Name": "/no-preload-872727",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-872727:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-872727",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a3d97acc35095d6356de5f3342e187399c9950a3e04ee7cacf0981888596d662",
	                "LowerDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6322f322157154ab2f58bab10eb169ae5720068fd917dea0ea91dddd38c54c96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-872727",
	                "Source": "/var/lib/docker/volumes/no-preload-872727/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-872727",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-872727",
	                "name.minikube.sigs.k8s.io": "no-preload-872727",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "455bc2abdfea372fbabce33f72e1c03326d7f813db96590e6af5c5361ab4b7e1",
	            "SandboxKey": "/var/run/docker/netns/455bc2abdfea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-872727": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:c5:ca:26:27:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d3d5cc4896cbfd283044d2bbac6b28bc7f91508576b47e0339f3f688dde7413",
	                    "EndpointID": "ab7090c93bf7d203bd551c93dc3b8d9a074498f1988a18659a8a970f389e4d7d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-872727",
	                        "a3d97acc3509"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
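Only a handful of fields in the inspect dump above matter for a pause failure: State.Status, State.Paused, and the published host ports. The minikube logs further down show the same narrowing done with a Go template (docker container inspect -f "{{.State.Status}}"); a sketch of an equivalent query, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// pauseState pulls just the pause-relevant container state instead of the
// full inspect JSON shown above.
func pauseState(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Status}} paused={{.State.Paused}}", container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := pauseState("no-preload-872727")
	fmt.Printf("%s err: %v\n", state, err)
}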
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-872727 -n no-preload-872727
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-872727 -n no-preload-872727: exit status 2 (506.88827ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-872727 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-872727 logs -n 25: (1.887630149s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-916440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-916440    │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ delete  │ -p cert-options-916440                                                                                                                                                                                                                        │ cert-options-916440    │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:12 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:12 UTC │ 08 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-332573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │                     │
	│ stop    │ -p old-k8s-version-332573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:13 UTC │ 08 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-332573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ image   │ old-k8s-version-332573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-332573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-328489 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p cert-expiration-328489                                                                                                                                                                                                                     │ cert-expiration-328489 │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │                     │
	│ stop    │ -p no-preload-872727 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ stop    │ -p embed-certs-606645 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645     │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ image   │ no-preload-872727 image list --format=json                                                                                                                                                                                                    │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727      │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:17:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:17:42.483940  488441 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:17:42.484297  488441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:17:42.484331  488441 out.go:374] Setting ErrFile to fd 2...
	I1108 10:17:42.484352  488441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:17:42.484749  488441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:17:42.485203  488441 out.go:368] Setting JSON to false
	I1108 10:17:42.486209  488441 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10812,"bootTime":1762586251,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:17:42.486311  488441 start.go:143] virtualization:  
	I1108 10:17:42.491399  488441 out.go:179] * [embed-certs-606645] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:17:42.494650  488441 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:17:42.494724  488441 notify.go:221] Checking for updates...
	I1108 10:17:42.500683  488441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:17:42.503701  488441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:42.506682  488441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:17:42.509910  488441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:17:42.512889  488441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:17:42.516309  488441 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:42.516944  488441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:17:42.550239  488441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:17:42.550354  488441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:17:42.615350  488441 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:17:42.605243378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:17:42.615541  488441 docker.go:319] overlay module found
	I1108 10:17:42.618805  488441 out.go:179] * Using the docker driver based on existing profile
	I1108 10:17:42.621805  488441 start.go:309] selected driver: docker
	I1108 10:17:42.621832  488441 start.go:930] validating driver "docker" against &{Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:42.621938  488441 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:17:42.622742  488441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:17:42.685996  488441 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:17:42.67659548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:17:42.686361  488441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:17:42.686421  488441 cni.go:84] Creating CNI manager for ""
	I1108 10:17:42.686487  488441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:17:42.686534  488441 start.go:353] cluster config:
	{Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:42.691422  488441 out.go:179] * Starting "embed-certs-606645" primary control-plane node in "embed-certs-606645" cluster
	I1108 10:17:42.694451  488441 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:17:42.697691  488441 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:17:42.700640  488441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:17:42.700715  488441 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:17:42.700727  488441 cache.go:59] Caching tarball of preloaded images
	I1108 10:17:42.700729  488441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:17:42.700819  488441 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:17:42.700829  488441 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:17:42.700966  488441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/config.json ...
	I1108 10:17:42.725999  488441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:17:42.726025  488441 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:17:42.726038  488441 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:17:42.726061  488441 start.go:360] acquireMachinesLock for embed-certs-606645: {Name:mke419d0c52d844252caf31cfbe575cf42b647de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:17:42.726121  488441 start.go:364] duration metric: took 37.317µs to acquireMachinesLock for "embed-certs-606645"
	I1108 10:17:42.726147  488441 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:17:42.726156  488441 fix.go:54] fixHost starting: 
	I1108 10:17:42.726418  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:42.743423  488441 fix.go:112] recreateIfNeeded on embed-certs-606645: state=Stopped err=<nil>
	W1108 10:17:42.743456  488441 fix.go:138] unexpected machine state, will restart: <nil>
	W1108 10:17:42.198433  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	W1108 10:17:44.696649  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	I1108 10:17:42.746855  488441 out.go:252] * Restarting existing docker container for "embed-certs-606645" ...
	I1108 10:17:42.746975  488441 cli_runner.go:164] Run: docker start embed-certs-606645
	I1108 10:17:42.994066  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:43.027023  488441 kic.go:430] container "embed-certs-606645" state is running.
	I1108 10:17:43.027434  488441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:17:43.054091  488441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/config.json ...
	I1108 10:17:43.054315  488441 machine.go:94] provisionDockerMachine start ...
	I1108 10:17:43.054378  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:43.077964  488441 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:43.078300  488441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1108 10:17:43.078315  488441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:17:43.079805  488441 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58046->127.0.0.1:33443: read: connection reset by peer
	I1108 10:17:46.237159  488441 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-606645
	
	I1108 10:17:46.237184  488441 ubuntu.go:182] provisioning hostname "embed-certs-606645"
	I1108 10:17:46.237269  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:46.256414  488441 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:46.256733  488441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1108 10:17:46.256749  488441 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-606645 && echo "embed-certs-606645" | sudo tee /etc/hostname
	I1108 10:17:46.432593  488441 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-606645
	
	I1108 10:17:46.432671  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:46.451731  488441 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:46.452092  488441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1108 10:17:46.452115  488441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606645/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:17:46.611319  488441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:17:46.611402  488441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:17:46.611445  488441 ubuntu.go:190] setting up certificates
	I1108 10:17:46.611473  488441 provision.go:84] configureAuth start
	I1108 10:17:46.611600  488441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:17:46.628683  488441 provision.go:143] copyHostCerts
	I1108 10:17:46.628761  488441 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:17:46.628776  488441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:17:46.628852  488441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:17:46.629052  488441 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:17:46.629059  488441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:17:46.629093  488441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:17:46.629159  488441 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:17:46.629176  488441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:17:46.629203  488441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:17:46.629284  488441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606645 san=[127.0.0.1 192.168.76.2 embed-certs-606645 localhost minikube]
	I1108 10:17:46.723100  488441 provision.go:177] copyRemoteCerts
	I1108 10:17:46.723171  488441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:17:46.723229  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:46.740438  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:46.849229  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:17:46.871040  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:17:46.890244  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1108 10:17:46.909552  488441 provision.go:87] duration metric: took 298.04915ms to configureAuth
	I1108 10:17:46.909584  488441 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:17:46.909810  488441 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:46.909923  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:46.927269  488441 main.go:143] libmachine: Using SSH client type: native
	I1108 10:17:46.927598  488441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1108 10:17:46.927618  488441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:17:47.276965  488441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:17:47.276996  488441 machine.go:97] duration metric: took 4.222660782s to provisionDockerMachine
	I1108 10:17:47.277007  488441 start.go:293] postStartSetup for "embed-certs-606645" (driver="docker")
	I1108 10:17:47.277018  488441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:17:47.277076  488441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:17:47.277129  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:47.305646  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:47.437080  488441 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:17:47.440432  488441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:17:47.440460  488441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:17:47.440471  488441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:17:47.440523  488441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:17:47.440602  488441 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:17:47.440706  488441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:17:47.447871  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:17:47.466888  488441 start.go:296] duration metric: took 189.865441ms for postStartSetup
	I1108 10:17:47.466980  488441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:17:47.467031  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:47.483842  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:47.585820  488441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:17:47.590435  488441 fix.go:56] duration metric: took 4.864272759s for fixHost
	I1108 10:17:47.590469  488441 start.go:83] releasing machines lock for "embed-certs-606645", held for 4.864335661s
	I1108 10:17:47.590538  488441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-606645
	I1108 10:17:47.607320  488441 ssh_runner.go:195] Run: cat /version.json
	I1108 10:17:47.607382  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:47.607649  488441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:17:47.607718  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:47.627274  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:47.627875  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:47.732799  488441 ssh_runner.go:195] Run: systemctl --version
	I1108 10:17:47.821822  488441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:17:47.861430  488441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:17:47.865836  488441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:17:47.865908  488441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:17:47.873874  488441 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:17:47.873897  488441 start.go:496] detecting cgroup driver to use...
	I1108 10:17:47.873930  488441 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:17:47.873974  488441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:17:47.889073  488441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:17:47.904231  488441 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:17:47.904302  488441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:17:47.920343  488441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:17:47.933601  488441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:17:48.063535  488441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:17:48.252051  488441 docker.go:234] disabling docker service ...
	I1108 10:17:48.252141  488441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:17:48.267648  488441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:17:48.282519  488441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:17:48.408066  488441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:17:48.530362  488441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:17:48.543011  488441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:17:48.557585  488441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:17:48.557711  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.566675  488441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:17:48.566794  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.575921  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.589361  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.599123  488441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:17:48.608346  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.619402  488441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.628870  488441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:17:48.638135  488441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:17:48.645611  488441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:17:48.653017  488441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:48.772841  488441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:17:48.913463  488441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:17:48.913578  488441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:17:48.917921  488441 start.go:564] Will wait 60s for crictl version
	I1108 10:17:48.918012  488441 ssh_runner.go:195] Run: which crictl
	I1108 10:17:48.921531  488441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:17:48.946816  488441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:17:48.946976  488441 ssh_runner.go:195] Run: crio --version
	I1108 10:17:48.975970  488441 ssh_runner.go:195] Run: crio --version
	I1108 10:17:49.010602  488441 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 10:17:46.697529  485498 pod_ready.go:104] pod "coredns-66bc5c9577-7xnlf" is not "Ready", error: <nil>
	I1108 10:17:47.696584  485498 pod_ready.go:94] pod "coredns-66bc5c9577-7xnlf" is "Ready"
	I1108 10:17:47.696607  485498 pod_ready.go:86] duration metric: took 31.00512402s for pod "coredns-66bc5c9577-7xnlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.698605  485498 pod_ready.go:83] waiting for pod "etcd-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.702337  485498 pod_ready.go:94] pod "etcd-no-preload-872727" is "Ready"
	I1108 10:17:47.702415  485498 pod_ready.go:86] duration metric: took 3.789137ms for pod "etcd-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.704439  485498 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.710084  485498 pod_ready.go:94] pod "kube-apiserver-no-preload-872727" is "Ready"
	I1108 10:17:47.710104  485498 pod_ready.go:86] duration metric: took 5.645628ms for pod "kube-apiserver-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.713183  485498 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:47.895280  485498 pod_ready.go:94] pod "kube-controller-manager-no-preload-872727" is "Ready"
	I1108 10:17:47.895306  485498 pod_ready.go:86] duration metric: took 182.031067ms for pod "kube-controller-manager-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:48.096716  485498 pod_ready.go:83] waiting for pod "kube-proxy-tl7z2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:48.495484  485498 pod_ready.go:94] pod "kube-proxy-tl7z2" is "Ready"
	I1108 10:17:48.495507  485498 pod_ready.go:86] duration metric: took 398.762747ms for pod "kube-proxy-tl7z2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:48.694762  485498 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:49.095921  485498 pod_ready.go:94] pod "kube-scheduler-no-preload-872727" is "Ready"
	I1108 10:17:49.095944  485498 pod_ready.go:86] duration metric: took 401.153259ms for pod "kube-scheduler-no-preload-872727" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:17:49.095956  485498 pod_ready.go:40] duration metric: took 32.480689424s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:17:49.174662  485498 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:17:49.177609  485498 out.go:179] * Done! kubectl is now configured to use "no-preload-872727" cluster and "default" namespace by default
	I1108 10:17:49.013430  488441 cli_runner.go:164] Run: docker network inspect embed-certs-606645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:17:49.030494  488441 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:17:49.039697  488441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
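The two Run lines above are the host.minikube.internal update: grep checks whether /etc/hosts already has the right entry, and the bash one-liner drops any stale line and appends the gateway mapping. A rough standard-library equivalent is sketched below; it assumes it runs directly on the node with write access to /etc/hosts, whereas minikube does this over SSH with sudo.

    // hosts_rewrite_sketch.go: drop any old "host.minikube.internal" entry and append a fresh one,
    // mirroring the bash one-liner in the log above. Hypothetical sketch, not minikube code.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.76.1\thost.minikube.internal" // IP taken from the log

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // equivalent of: grep -v $'\thost.minikube.internal$'
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }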
	I1108 10:17:49.049879  488441 kubeadm.go:884] updating cluster {Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:17:49.049985  488441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:17:49.050037  488441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:17:49.090547  488441 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:17:49.090568  488441 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:17:49.090621  488441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:17:49.132497  488441 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:17:49.132519  488441 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:17:49.132527  488441 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:17:49.132627  488441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-606645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
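The unit rendered above becomes the kubelet systemd drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, after which minikube runs systemctl daemon-reload and systemctl start kubelet. A hedged local sketch of that write-and-restart step (minikube does it through ssh_runner on the node, not locally):

    // kubelet_dropin_sketch.go: write the rendered drop-in and restart kubelet.
    // Paths and systemctl calls are the ones visible in the log; running locally is an assumption.
    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        lines := []string{
            "[Unit]",
            "Wants=crio.service",
            "",
            "[Service]",
            "ExecStart=",
            "ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-606645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2",
            "",
            "[Install]",
        }
        dropin := strings.Join(lines, "\n") + "\n"

        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropin), 0644); err != nil {
            panic(err)
        }
        for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }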
	I1108 10:17:49.132705  488441 ssh_runner.go:195] Run: crio config
	I1108 10:17:49.273763  488441 cni.go:84] Creating CNI manager for ""
	I1108 10:17:49.273787  488441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:17:49.273808  488441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:17:49.273832  488441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606645 NodeName:embed-certs-606645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:17:49.273963  488441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606645"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
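The block above is the generated kubeadm.yaml: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that are copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch for sanity-checking such a file by decoding each document and printing its apiVersion and kind; gopkg.in/yaml.v3 is an assumed dependency for the sketch, not necessarily what minikube itself uses.

    // kubeadm_yaml_check_sketch.go: decode each document in a multi-doc kubeadm.yaml
    // and print its apiVersion/kind. gopkg.in/yaml.v3 is an assumption for this sketch.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log below
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }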
	
	I1108 10:17:49.274039  488441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:17:49.290059  488441 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:17:49.290117  488441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:17:49.299182  488441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 10:17:49.322319  488441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:17:49.340298  488441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1108 10:17:49.359038  488441 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:17:49.365662  488441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:17:49.376590  488441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:49.512227  488441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:17:49.533047  488441 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645 for IP: 192.168.76.2
	I1108 10:17:49.533068  488441 certs.go:195] generating shared ca certs ...
	I1108 10:17:49.533084  488441 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:49.533218  488441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:17:49.533274  488441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:17:49.533289  488441 certs.go:257] generating profile certs ...
	I1108 10:17:49.533382  488441 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/client.key
	I1108 10:17:49.533446  488441 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key.9e91513e
	I1108 10:17:49.533488  488441 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.key
	I1108 10:17:49.533605  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:17:49.533639  488441 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:17:49.533652  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:17:49.533685  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:17:49.533712  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:17:49.533738  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:17:49.533782  488441 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:17:49.534390  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:17:49.558477  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:17:49.583644  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:17:49.620562  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:17:49.657822  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 10:17:49.713990  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:17:49.731993  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:17:49.750569  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/embed-certs-606645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:17:49.770044  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:17:49.789524  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:17:49.808655  488441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:17:49.827777  488441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:17:49.842236  488441 ssh_runner.go:195] Run: openssl version
	I1108 10:17:49.853933  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:17:49.864705  488441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:17:49.870268  488441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:17:49.870362  488441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:17:49.920517  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:17:49.929730  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:17:49.939331  488441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:17:49.943155  488441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:17:49.943243  488441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:17:49.986988  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:17:49.996257  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:17:50.007577  488441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:50.012806  488441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:50.012881  488441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:17:50.057773  488441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:17:50.066295  488441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:17:50.070733  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:17:50.113678  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:17:50.155274  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:17:50.197381  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:17:50.238771  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:17:50.280557  488441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
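Each of the openssl x509 -checkend 86400 calls above simply asks whether the named certificate will still be valid 24 hours from now (openssl exits non-zero if not). The same check can be expressed with Go's standard crypto/x509, as in this sketch; the certificate path is one of those from the log.

    // cert_checkend_sketch.go: equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 86400s") // openssl would exit 1 here
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }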
	I1108 10:17:50.328175  488441 kubeadm.go:401] StartCluster: {Name:embed-certs-606645 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-606645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:17:50.328274  488441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:17:50.328334  488441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:17:50.404851  488441 cri.go:89] found id: ""
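Before deciding between a fresh kubeadm init and a restart, StartCluster lists any existing kube-system containers with the crictl invocation shown above; the empty result (found id: "") means nothing is running under CRI-O yet. A sketch of driving the same query from Go, assuming crictl is on PATH with its default CRI-O endpoint:

    // crictl_list_sketch.go: run the same crictl query minikube issues and print the IDs it returns.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Command copied from the log: list kube-system pod containers, quiet output (IDs only).
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system container(s): %v\n", len(ids), ids)
    }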
	I1108 10:17:50.404985  488441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:17:50.419026  488441 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:17:50.419059  488441 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:17:50.419127  488441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:17:50.434661  488441 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:17:50.435320  488441 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-606645" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:50.435587  488441 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-292236/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-606645" cluster setting kubeconfig missing "embed-certs-606645" context setting]
	I1108 10:17:50.436152  488441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:50.440447  488441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:17:50.455296  488441 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:17:50.455370  488441 kubeadm.go:602] duration metric: took 36.296202ms to restartPrimaryControlPlane
	I1108 10:17:50.455400  488441 kubeadm.go:403] duration metric: took 127.233818ms to StartCluster
	I1108 10:17:50.455451  488441 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:50.455538  488441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:17:50.457648  488441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:17:50.458778  488441 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:17:50.459855  488441 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:17:50.459905  488441 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:17:50.460032  488441 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-606645"
	I1108 10:17:50.460044  488441 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-606645"
	W1108 10:17:50.460050  488441 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:17:50.460084  488441 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:17:50.460599  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:50.463498  488441 addons.go:70] Setting default-storageclass=true in profile "embed-certs-606645"
	I1108 10:17:50.463528  488441 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606645"
	I1108 10:17:50.467183  488441 addons.go:70] Setting dashboard=true in profile "embed-certs-606645"
	I1108 10:17:50.467216  488441 addons.go:239] Setting addon dashboard=true in "embed-certs-606645"
	W1108 10:17:50.467224  488441 addons.go:248] addon dashboard should already be in state true
	I1108 10:17:50.467256  488441 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:17:50.467709  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:50.468301  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:50.469387  488441 out.go:179] * Verifying Kubernetes components...
	I1108 10:17:50.480495  488441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:17:50.554717  488441 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:17:50.554822  488441 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:17:50.557658  488441 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:17:50.557679  488441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:17:50.557751  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:50.565047  488441 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:17:50.565968  488441 addons.go:239] Setting addon default-storageclass=true in "embed-certs-606645"
	W1108 10:17:50.565989  488441 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:17:50.566015  488441 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:17:50.566491  488441 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:17:50.568588  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:17:50.568607  488441 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:17:50.568672  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:50.616416  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:50.629895  488441 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:17:50.629920  488441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:17:50.629980  488441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:17:50.645233  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:50.662405  488441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:17:50.842499  488441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:17:50.889504  488441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606645" to be "Ready" ...
	I1108 10:17:50.936985  488441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:17:50.940803  488441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:17:51.027499  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:17:51.027570  488441 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:17:51.114295  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:17:51.114373  488441 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:17:51.198940  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:17:51.199025  488441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:17:51.240240  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:17:51.240313  488441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:17:51.267623  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:17:51.267723  488441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:17:51.290489  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:17:51.290563  488441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:17:51.316685  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:17:51.316756  488441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:17:51.339524  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:17:51.339595  488441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:17:51.363739  488441 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:17:51.363808  488441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:17:51.383091  488441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:17:54.699109  488441 node_ready.go:49] node "embed-certs-606645" is "Ready"
	I1108 10:17:54.699136  488441 node_ready.go:38] duration metric: took 3.809550776s for node "embed-certs-606645" to be "Ready" ...
	I1108 10:17:54.699151  488441 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:17:54.699210  488441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:17:55.824074  488441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.887015389s)
	I1108 10:17:55.824140  488441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.883276115s)
	I1108 10:17:55.824512  488441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.441339442s)
	I1108 10:17:55.825234  488441 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.126009571s)
	I1108 10:17:55.825262  488441 api_server.go:72] duration metric: took 5.366399467s to wait for apiserver process to appear ...
	I1108 10:17:55.825269  488441 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:17:55.825282  488441 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:17:55.827531  488441 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-606645 addons enable metrics-server
	
	I1108 10:17:55.839107  488441 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:17:55.839189  488441 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:17:55.851449  488441 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 10:17:55.854350  488441 addons.go:515] duration metric: took 5.394427496s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 10:17:56.326135  488441 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:17:56.336060  488441 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:17:56.337176  488441 api_server.go:141] control plane version: v1.34.1
	I1108 10:17:56.337210  488441 api_server.go:131] duration metric: took 511.934204ms to wait for apiserver health ...
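The healthz wait above shows the typical restart pattern: the first probe returns 500 because the rbac/bootstrap-roles post-start hook has not finished, and the next probe returns 200. A minimal poller in that spirit, using only net/http; the InsecureSkipVerify transport stands in for minikube's client certificates and is an assumption of this sketch.

    // healthz_poll_sketch.go: poll the apiserver /healthz endpoint until it returns 200 or a deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint from the log; skipping TLS verification is an assumption for this sketch only.
        const url = "https://192.168.76.2:8443/healthz"
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }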
	I1108 10:17:56.337249  488441 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:17:56.341236  488441 system_pods.go:59] 8 kube-system pods found
	I1108 10:17:56.341280  488441 system_pods.go:61] "coredns-66bc5c9577-t2frl" [e22d81d9-6568-4569-908f-cefa38ef9b76] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:56.341292  488441 system_pods.go:61] "etcd-embed-certs-606645" [38fe8240-e9fc-4f51-a081-491490c73119] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:17:56.341312  488441 system_pods.go:61] "kindnet-tb5h7" [693ec6c4-791c-4411-a276-f4bfbdfb845e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:17:56.341320  488441 system_pods.go:61] "kube-apiserver-embed-certs-606645" [f40b54f2-7c30-45ae-b914-881edc3f3afe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:17:56.341332  488441 system_pods.go:61] "kube-controller-manager-embed-certs-606645" [2d4b93ff-dfad-47c6-bc9b-ea156cc3c186] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:17:56.341339  488441 system_pods.go:61] "kube-proxy-tvxrb" [0ac67495-1d1e-481c-bf20-c9ccf1d66d41] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:17:56.341346  488441 system_pods.go:61] "kube-scheduler-embed-certs-606645" [8c26f963-b116-494f-b3c9-898f96ef6e94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:17:56.341360  488441 system_pods.go:61] "storage-provisioner" [f82be00b-3c38-44dc-afef-f1e2434ae470] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:17:56.341370  488441 system_pods.go:74] duration metric: took 4.110748ms to wait for pod list to return data ...
	I1108 10:17:56.341382  488441 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:17:56.344026  488441 default_sa.go:45] found service account: "default"
	I1108 10:17:56.344050  488441 default_sa.go:55] duration metric: took 2.660676ms for default service account to be created ...
	I1108 10:17:56.344060  488441 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:17:56.347483  488441 system_pods.go:86] 8 kube-system pods found
	I1108 10:17:56.347517  488441 system_pods.go:89] "coredns-66bc5c9577-t2frl" [e22d81d9-6568-4569-908f-cefa38ef9b76] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:17:56.347527  488441 system_pods.go:89] "etcd-embed-certs-606645" [38fe8240-e9fc-4f51-a081-491490c73119] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:17:56.347537  488441 system_pods.go:89] "kindnet-tb5h7" [693ec6c4-791c-4411-a276-f4bfbdfb845e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:17:56.347546  488441 system_pods.go:89] "kube-apiserver-embed-certs-606645" [f40b54f2-7c30-45ae-b914-881edc3f3afe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:17:56.347553  488441 system_pods.go:89] "kube-controller-manager-embed-certs-606645" [2d4b93ff-dfad-47c6-bc9b-ea156cc3c186] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:17:56.347560  488441 system_pods.go:89] "kube-proxy-tvxrb" [0ac67495-1d1e-481c-bf20-c9ccf1d66d41] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:17:56.347579  488441 system_pods.go:89] "kube-scheduler-embed-certs-606645" [8c26f963-b116-494f-b3c9-898f96ef6e94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:17:56.347590  488441 system_pods.go:89] "storage-provisioner" [f82be00b-3c38-44dc-afef-f1e2434ae470] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:17:56.347600  488441 system_pods.go:126] duration metric: took 3.533922ms to wait for k8s-apps to be running ...
	I1108 10:17:56.347613  488441 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:17:56.347671  488441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:17:56.361028  488441 system_svc.go:56] duration metric: took 13.405747ms WaitForService to wait for kubelet
	I1108 10:17:56.361103  488441 kubeadm.go:587] duration metric: took 5.902238119s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:17:56.361135  488441 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:17:56.365614  488441 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:17:56.365694  488441 node_conditions.go:123] node cpu capacity is 2
	I1108 10:17:56.365721  488441 node_conditions.go:105] duration metric: took 4.548758ms to run NodePressure ...
	I1108 10:17:56.365746  488441 start.go:242] waiting for startup goroutines ...
	I1108 10:17:56.365790  488441 start.go:247] waiting for cluster config update ...
	I1108 10:17:56.365815  488441 start.go:256] writing updated cluster config ...
	I1108 10:17:56.366134  488441 ssh_runner.go:195] Run: rm -f paused
	I1108 10:17:56.370147  488441 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:17:56.373779  488441 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t2frl" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:17:58.379082  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:00.382304  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 08 10:17:41 no-preload-872727 crio[651]: time="2025-11-08T10:17:41.356157254Z" level=info msg="Removed container d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd/dashboard-metrics-scraper" id=7433b73b-d67d-471c-8b21-e10e166fa3e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:17:46 no-preload-872727 conmon[1146]: conmon a101294ff5a06d18c6fe <ninfo>: container 1149 exited with status 1
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.353669222Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=80a24fac-0a84-4428-aa15-8eaefe6bd5b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.3545751Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f52c4f14-305d-483a-ad98-6eb16add6c92 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.355439256Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=351e58e6-7802-4bd7-a8f6-8d22c24757d9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.355530776Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.364259304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.364546034Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/caff05c8e6d03e6f4193c468106d68f8c820f068628a13e9c5a7159dfd11d367/merged/etc/passwd: no such file or directory"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.364634371Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/caff05c8e6d03e6f4193c468106d68f8c820f068628a13e9c5a7159dfd11d367/merged/etc/group: no such file or directory"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.364975125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.384785415Z" level=info msg="Created container f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95: kube-system/storage-provisioner/storage-provisioner" id=351e58e6-7802-4bd7-a8f6-8d22c24757d9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.38554296Z" level=info msg="Starting container: f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95" id=6a648da4-2a99-43da-b16e-8d515bbf2ae9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:17:46 no-preload-872727 crio[651]: time="2025-11-08T10:17:46.38783811Z" level=info msg="Started container" PID=1643 containerID=f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95 description=kube-system/storage-provisioner/storage-provisioner id=6a648da4-2a99-43da-b16e-8d515bbf2ae9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=673b694be03b780ed8f59eaa4b9ff20ebd6d2f8bb155f8ee50451fe956cbfc75
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.215385527Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.219798357Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.219832442Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.219855547Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.223781744Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.223815008Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.223838655Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.227908525Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.227943241Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.227969465Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.239394058Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:17:56 no-preload-872727 crio[651]: time="2025-11-08T10:17:56.239588373Z" level=info msg="Updated default CNI network name to kindnet"
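The CRI-O lines above are its CNI monitor reacting to kindnet writing 10-kindnet.conflist.temp and renaming it into place, after which CRI-O re-reads /etc/cni/net.d and switches its default network to kindnet. A hedged illustration of that kind of directory watch using fsnotify (an assumed third-party dependency; CRI-O's real monitor is considerably more involved):

    // cni_watch_sketch.go: watch /etc/cni/net.d for CREATE/WRITE/RENAME events,
    // the same events CRI-O logs above. github.com/fsnotify/fsnotify is an assumed dependency.
    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        watcher, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer watcher.Close()

        if err := watcher.Add("/etc/cni/net.d"); err != nil { // directory from the log
            log.Fatal(err)
        }
        for {
            select {
            case ev, ok := <-watcher.Events:
                if !ok {
                    return
                }
                // CRI-O reloads its network list on events like these and re-picks the default CNI network.
                if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
                    log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
                }
            case err, ok := <-watcher.Errors:
                if !ok {
                    return
                }
                log.Println("watch error:", err)
            }
        }
    }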
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f334ef93153e7       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   673b694be03b7       storage-provisioner                          kube-system
	8e40016b4466d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   d5a08333b2ac4       dashboard-metrics-scraper-6ffb444bf9-gqtzd   kubernetes-dashboard
	52b5032212e00       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago      Running             kubernetes-dashboard        0                   b9359a5394efb       kubernetes-dashboard-855c9754f9-q4gsc        kubernetes-dashboard
	9488d464568f4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   c38fde1107026       busybox                                      default
	d8a6e6955b170       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   88031f1d74b8f       coredns-66bc5c9577-7xnlf                     kube-system
	a101294ff5a06       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   673b694be03b7       storage-provisioner                          kube-system
	4a7f5f22e728f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   4fa8d0dfda9d8       kube-proxy-tl7z2                             kube-system
	ec2b9322a1de4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   a51bc3803df47       kindnet-lld9n                                kube-system
	07e3896f175cb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   f8cf5c99088c5       kube-controller-manager-no-preload-872727    kube-system
	c2aabe05d680c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   a2018b927e18a       kube-scheduler-no-preload-872727             kube-system
	25640dc0fff19       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   63554448201b6       etcd-no-preload-872727                       kube-system
	af4ee873dcf3a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   77d208130cf09       kube-apiserver-no-preload-872727             kube-system
	
	
	==> coredns [d8a6e6955b1700e728b0506a7d873f21785124dd5f3c6ce00ed73c7412fb24e7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60674 - 33592 "HINFO IN 4862426244402300975.1767909836293083321. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030885776s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-872727
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-872727
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=no-preload-872727
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_16_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:16:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-872727
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:17:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:17:55 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:17:55 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:17:55 +0000   Sat, 08 Nov 2025 10:16:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:17:55 +0000   Sat, 08 Nov 2025 10:16:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-872727
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                f5ae8ced-8225-4268-ba4a-f32dd64e1a62
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-7xnlf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-872727                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-lld9n                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-872727              250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-872727     200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-tl7z2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-872727              100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gqtzd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-q4gsc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node no-preload-872727 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node no-preload-872727 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node no-preload-872727 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    115s                 kubelet          Node no-preload-872727 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s                 kubelet          Node no-preload-872727 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  115s                 kubelet          Node no-preload-872727 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           112s                 node-controller  Node no-preload-872727 event: Registered Node no-preload-872727 in Controller
	  Normal   NodeReady                95s                  kubelet          Node no-preload-872727 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-872727 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-872727 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-872727 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                  node-controller  Node no-preload-872727 event: Registered Node no-preload-872727 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [25640dc0fff195b37a317ec2cae1b3fac7db485a4e609e296a62be1978b92dec] <==
	{"level":"warn","ts":"2025-11-08T10:17:11.971543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:11.988634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.033515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.061038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.134596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.136553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.169829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.183279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.206770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.245808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.256767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.290462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.321400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.342564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.357790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.383992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.399460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.416854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.442418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.502715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.523598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.554506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.615090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.659034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:12.737720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37102","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:18:07 up  3:00,  0 user,  load average: 5.28, 4.09, 2.91
	Linux no-preload-872727 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec2b9322a1de4af91aed5a8283aa5006a918d9ec578e981fe89fc2d4684ee922] <==
	I1108 10:17:16.017986       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:17:16.018423       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:17:16.018584       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:17:16.018628       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:17:16.018669       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:17:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:17:16.214669       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:17:16.214766       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:17:16.214800       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:17:16.215359       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:17:46.216846       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:17:46.216880       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 10:17:46.217074       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:17:46.218054       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1108 10:17:47.715806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:17:47.715920       1 metrics.go:72] Registering metrics
	I1108 10:17:47.716004       1 controller.go:711] "Syncing nftables rules"
	I1108 10:17:56.215031       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:17:56.215085       1 main.go:301] handling current node
	I1108 10:18:06.214435       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:18:06.214465       1 main.go:301] handling current node
	
	
	==> kube-apiserver [af4ee873dcf3a9f5542182a40c089c34fbb16da34cb89643f859ca8c741c206b] <==
	I1108 10:17:14.165931       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:17:14.165953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:17:14.178896       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:17:14.179158       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:17:14.179437       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:17:14.179564       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:17:14.179579       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:17:14.179761       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:17:14.179791       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:17:14.179798       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:17:14.179803       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:17:14.180349       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:17:14.216094       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1108 10:17:14.247121       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:17:14.571480       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:17:15.154346       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:17:15.199484       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:17:15.404644       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:17:15.570028       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:17:15.636817       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:17:15.933266       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.255.172"}
	I1108 10:17:15.970444       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.165.146"}
	I1108 10:17:18.235079       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:17:18.534989       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:17:18.636747       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [07e3896f175cbc700250d85b4144c1a9d57dd773a77aaa820c8f3638851a6914] <==
	I1108 10:17:18.122076       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:17:18.125313       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:17:18.128011       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:17:18.128767       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 10:17:18.128841       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:17:18.128852       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:17:18.128864       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:17:18.129242       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:17:18.129425       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:17:18.130681       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:17:18.130762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:17:18.130805       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:17:18.132076       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:17:18.132121       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:17:18.132151       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:17:18.132888       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:17:18.167988       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:17:18.172425       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:17:18.176398       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:17:18.176568       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:17:18.184083       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-872727"
	I1108 10:17:18.184780       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:17:18.184826       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:17:18.186189       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:17:18.187207       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [4a7f5f22e728f39f9a0f36bc691d475caae9deb5d2b1bc5741b93a7fb1a4320e] <==
	I1108 10:17:16.055459       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:17:16.202717       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:17:16.304995       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:17:16.305797       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:17:16.305943       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:17:16.356334       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:17:16.356467       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:17:16.362778       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:17:16.363147       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:17:16.363172       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:17:16.364302       1 config.go:200] "Starting service config controller"
	I1108 10:17:16.364366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:17:16.371858       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:17:16.371962       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:17:16.372012       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:17:16.372039       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:17:16.375745       1 config.go:309] "Starting node config controller"
	I1108 10:17:16.377014       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:17:16.377086       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:17:16.465292       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:17:16.472645       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:17:16.472758       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c2aabe05d680cabcaa20b0665b445667d7738cbdb6f133edcb0233dc3bbc9d6b] <==
	I1108 10:17:13.005794       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:17:16.161937       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:17:16.162039       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:17:16.169314       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:17:16.169437       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:17:16.169516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:16.169549       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:16.169598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:17:16.169649       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:17:16.169812       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:17:16.169942       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:17:16.271147       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:17:16.271596       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:16.277056       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:17:18 no-preload-872727 kubelet[769]: I1108 10:17:18.865757     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf98k\" (UniqueName: \"kubernetes.io/projected/9f8ea253-398d-4f7a-abbc-90ac9d766530-kube-api-access-mf98k\") pod \"dashboard-metrics-scraper-6ffb444bf9-gqtzd\" (UID: \"9f8ea253-398d-4f7a-abbc-90ac9d766530\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd"
	Nov 08 10:17:18 no-preload-872727 kubelet[769]: I1108 10:17:18.865823     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9f8ea253-398d-4f7a-abbc-90ac9d766530-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gqtzd\" (UID: \"9f8ea253-398d-4f7a-abbc-90ac9d766530\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd"
	Nov 08 10:17:18 no-preload-872727 kubelet[769]: I1108 10:17:18.865848     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/819ad1c3-65e1-4aa1-9ef5-cc4151ca68be-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-q4gsc\" (UID: \"819ad1c3-65e1-4aa1-9ef5-cc4151ca68be\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q4gsc"
	Nov 08 10:17:18 no-preload-872727 kubelet[769]: I1108 10:17:18.865875     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnck2\" (UniqueName: \"kubernetes.io/projected/819ad1c3-65e1-4aa1-9ef5-cc4151ca68be-kube-api-access-gnck2\") pod \"kubernetes-dashboard-855c9754f9-q4gsc\" (UID: \"819ad1c3-65e1-4aa1-9ef5-cc4151ca68be\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q4gsc"
	Nov 08 10:17:23 no-preload-872727 kubelet[769]: I1108 10:17:23.282568     769 scope.go:117] "RemoveContainer" containerID="5b65167036a4ac2cd5312ccfe00f3c71399270c5039b0b92e76ca62ba7c31842"
	Nov 08 10:17:24 no-preload-872727 kubelet[769]: I1108 10:17:24.292252     769 scope.go:117] "RemoveContainer" containerID="5b65167036a4ac2cd5312ccfe00f3c71399270c5039b0b92e76ca62ba7c31842"
	Nov 08 10:17:24 no-preload-872727 kubelet[769]: I1108 10:17:24.292497     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:24 no-preload-872727 kubelet[769]: E1108 10:17:24.292681     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:17:25 no-preload-872727 kubelet[769]: I1108 10:17:25.296963     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:25 no-preload-872727 kubelet[769]: E1108 10:17:25.299943     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:17:29 no-preload-872727 kubelet[769]: I1108 10:17:29.033299     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:29 no-preload-872727 kubelet[769]: E1108 10:17:29.033512     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: I1108 10:17:41.179753     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: I1108 10:17:41.338438     769 scope.go:117] "RemoveContainer" containerID="d782ae7e81046de2c5ed2eb4116811f3e883fc2204886f4b5a69543d7a4a3d2c"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: I1108 10:17:41.338739     769 scope.go:117] "RemoveContainer" containerID="8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: E1108 10:17:41.338890     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:17:41 no-preload-872727 kubelet[769]: I1108 10:17:41.358211     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q4gsc" podStartSLOduration=13.664260595 podStartE2EDuration="23.35819417s" podCreationTimestamp="2025-11-08 10:17:18 +0000 UTC" firstStartedPulling="2025-11-08 10:17:19.0948774 +0000 UTC m=+11.198752800" lastFinishedPulling="2025-11-08 10:17:28.788810976 +0000 UTC m=+20.892686375" observedRunningTime="2025-11-08 10:17:29.329609047 +0000 UTC m=+21.433484463" watchObservedRunningTime="2025-11-08 10:17:41.35819417 +0000 UTC m=+33.462069570"
	Nov 08 10:17:46 no-preload-872727 kubelet[769]: I1108 10:17:46.352818     769 scope.go:117] "RemoveContainer" containerID="a101294ff5a06d18c6fefecf32199f4ab4989e79bb47341ea61784dab8608220"
	Nov 08 10:17:49 no-preload-872727 kubelet[769]: I1108 10:17:49.033364     769 scope.go:117] "RemoveContainer" containerID="8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	Nov 08 10:17:49 no-preload-872727 kubelet[769]: E1108 10:17:49.033540     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:18:01 no-preload-872727 kubelet[769]: I1108 10:18:01.180076     769 scope.go:117] "RemoveContainer" containerID="8e40016b4466d6aace0821ee2cc863a4105f0fcf00bac0d104609a63410ee85b"
	Nov 08 10:18:01 no-preload-872727 kubelet[769]: E1108 10:18:01.180257     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqtzd_kubernetes-dashboard(9f8ea253-398d-4f7a-abbc-90ac9d766530)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqtzd" podUID="9f8ea253-398d-4f7a-abbc-90ac9d766530"
	Nov 08 10:18:01 no-preload-872727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:18:01 no-preload-872727 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:18:01 no-preload-872727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [52b5032212e00b125297bb977a888b5c53005489413ae8da2f80e4d3ee09b028] <==
	2025/11/08 10:17:28 Using namespace: kubernetes-dashboard
	2025/11/08 10:17:28 Using in-cluster config to connect to apiserver
	2025/11/08 10:17:28 Using secret token for csrf signing
	2025/11/08 10:17:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:17:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:17:28 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:17:28 Generating JWE encryption key
	2025/11/08 10:17:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:17:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:17:29 Initializing JWE encryption key from synchronized object
	2025/11/08 10:17:29 Creating in-cluster Sidecar client
	2025/11/08 10:17:29 Serving insecurely on HTTP port: 9090
	2025/11/08 10:17:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:17:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:17:28 Starting overwatch
	
	
	==> storage-provisioner [a101294ff5a06d18c6fefecf32199f4ab4989e79bb47341ea61784dab8608220] <==
	I1108 10:17:16.014624       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:17:46.016181       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f334ef93153e73c62a8c3914597bfe56b81ac0f41e58baa990518f8ade426f95] <==
	I1108 10:17:46.408065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:17:46.421548       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:17:46.421732       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:17:46.424662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:49.880546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:54.141260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:17:57.739657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:00.794438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:03.816116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:03.821304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:18:03.821470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:18:03.821623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-872727_985f850f-b4d0-464e-8c8c-487632a580f6!
	I1108 10:18:03.822558       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e3bc8d2-5847-4f52-bedc-77da0e14b7f9", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-872727_985f850f-b4d0-464e-8c8c-487632a580f6 became leader
	W1108 10:18:03.827213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:03.847511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:18:03.923237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-872727_985f850f-b4d0-464e-8c8c-487632a580f6!
	W1108 10:18:05.851835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:05.865497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:07.869495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:07.875425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-872727 -n no-preload-872727
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-872727 -n no-preload-872727: exit status 2 (557.770234ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-872727 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-606645 --alsologtostderr -v=1
E1108 10:18:50.427432  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-606645 --alsologtostderr -v=1: exit status 80 (2.207808825s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-606645 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:18:48.335306  494365 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:18:48.335510  494365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:48.335524  494365 out.go:374] Setting ErrFile to fd 2...
	I1108 10:18:48.335529  494365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:48.335809  494365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:18:48.336114  494365 out.go:368] Setting JSON to false
	I1108 10:18:48.336143  494365 mustload.go:66] Loading cluster: embed-certs-606645
	I1108 10:18:48.336578  494365 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:48.337138  494365 cli_runner.go:164] Run: docker container inspect embed-certs-606645 --format={{.State.Status}}
	I1108 10:18:48.364671  494365 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:18:48.365017  494365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:18:48.463569  494365 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:18:48.450108919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:18:48.464212  494365 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-606645 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:18:48.467743  494365 out.go:179] * Pausing node embed-certs-606645 ... 
	I1108 10:18:48.471584  494365 host.go:66] Checking if "embed-certs-606645" exists ...
	I1108 10:18:48.471926  494365 ssh_runner.go:195] Run: systemctl --version
	I1108 10:18:48.471992  494365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-606645
	I1108 10:18:48.499340  494365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/embed-certs-606645/id_rsa Username:docker}
	I1108 10:18:48.631189  494365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:18:48.667648  494365 pause.go:52] kubelet running: true
	I1108 10:18:48.667716  494365 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:18:49.037204  494365 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:18:49.037284  494365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:18:49.167162  494365 cri.go:89] found id: "c20664f059121df02ea619e301151ea7513e1dd1eb20d1419ce1a514d6ca58da"
	I1108 10:18:49.167233  494365 cri.go:89] found id: "042edc78a1ba06a2c62964d4f6528d68d583b2b7f403c53d71c4fee80cc08052"
	I1108 10:18:49.167252  494365 cri.go:89] found id: "442e9f54ea9d773c0c532faed6983236644cf9b2a7b140f49dbc444185e223a1"
	I1108 10:18:49.167272  494365 cri.go:89] found id: "e0039c06b9d3ed0c15a3cbdb6881dcdc6c82aaadaa34cddb7cdae0c77d071028"
	I1108 10:18:49.167307  494365 cri.go:89] found id: "391f1b0171025f124806525a7d27c429a750dc65ffcdaded79fc59e096061d09"
	I1108 10:18:49.167329  494365 cri.go:89] found id: "5006a51562d78f245738f517395a022d4af9f17acc61f08716bf0611005b63d5"
	I1108 10:18:49.167346  494365 cri.go:89] found id: "2298a2b9d3c1ef0e53019c3afbbfde3d06e6f3fe8557c487e2ea43cb7b855e00"
	I1108 10:18:49.167363  494365 cri.go:89] found id: "08e54b19799530c7e6d595805299e60c4e547af1744c8361f316134cbbe2a926"
	I1108 10:18:49.167394  494365 cri.go:89] found id: "c8f3c7bba121ffe6cb77869768c5c4e9a6be9275e646d287e9e1c92fbad9874a"
	I1108 10:18:49.167425  494365 cri.go:89] found id: "71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013"
	I1108 10:18:49.167443  494365 cri.go:89] found id: "01bb45ef7ee363146702b3f88b829650308e0f0fbd58f03c70b08a1236b6a4e3"
	I1108 10:18:49.167476  494365 cri.go:89] found id: ""
	I1108 10:18:49.167570  494365 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:18:49.182484  494365 retry.go:31] will retry after 127.494952ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:18:49Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:18:49.310868  494365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:18:49.328588  494365 pause.go:52] kubelet running: false
	I1108 10:18:49.328700  494365 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:18:49.595791  494365 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:18:49.595952  494365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:18:49.689628  494365 cri.go:89] found id: "c20664f059121df02ea619e301151ea7513e1dd1eb20d1419ce1a514d6ca58da"
	I1108 10:18:49.689699  494365 cri.go:89] found id: "042edc78a1ba06a2c62964d4f6528d68d583b2b7f403c53d71c4fee80cc08052"
	I1108 10:18:49.689733  494365 cri.go:89] found id: "442e9f54ea9d773c0c532faed6983236644cf9b2a7b140f49dbc444185e223a1"
	I1108 10:18:49.689758  494365 cri.go:89] found id: "e0039c06b9d3ed0c15a3cbdb6881dcdc6c82aaadaa34cddb7cdae0c77d071028"
	I1108 10:18:49.689785  494365 cri.go:89] found id: "391f1b0171025f124806525a7d27c429a750dc65ffcdaded79fc59e096061d09"
	I1108 10:18:49.689816  494365 cri.go:89] found id: "5006a51562d78f245738f517395a022d4af9f17acc61f08716bf0611005b63d5"
	I1108 10:18:49.689838  494365 cri.go:89] found id: "2298a2b9d3c1ef0e53019c3afbbfde3d06e6f3fe8557c487e2ea43cb7b855e00"
	I1108 10:18:49.689857  494365 cri.go:89] found id: "08e54b19799530c7e6d595805299e60c4e547af1744c8361f316134cbbe2a926"
	I1108 10:18:49.689876  494365 cri.go:89] found id: "c8f3c7bba121ffe6cb77869768c5c4e9a6be9275e646d287e9e1c92fbad9874a"
	I1108 10:18:49.689908  494365 cri.go:89] found id: "71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013"
	I1108 10:18:49.689929  494365 cri.go:89] found id: "01bb45ef7ee363146702b3f88b829650308e0f0fbd58f03c70b08a1236b6a4e3"
	I1108 10:18:49.689948  494365 cri.go:89] found id: ""
	I1108 10:18:49.690027  494365 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:18:49.702222  494365 retry.go:31] will retry after 387.079838ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:18:49Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:18:50.089627  494365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:18:50.104341  494365 pause.go:52] kubelet running: false
	I1108 10:18:50.104435  494365 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:18:50.346167  494365 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:18:50.346251  494365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:18:50.427279  494365 cri.go:89] found id: "c20664f059121df02ea619e301151ea7513e1dd1eb20d1419ce1a514d6ca58da"
	I1108 10:18:50.427302  494365 cri.go:89] found id: "042edc78a1ba06a2c62964d4f6528d68d583b2b7f403c53d71c4fee80cc08052"
	I1108 10:18:50.427306  494365 cri.go:89] found id: "442e9f54ea9d773c0c532faed6983236644cf9b2a7b140f49dbc444185e223a1"
	I1108 10:18:50.427314  494365 cri.go:89] found id: "e0039c06b9d3ed0c15a3cbdb6881dcdc6c82aaadaa34cddb7cdae0c77d071028"
	I1108 10:18:50.427318  494365 cri.go:89] found id: "391f1b0171025f124806525a7d27c429a750dc65ffcdaded79fc59e096061d09"
	I1108 10:18:50.427322  494365 cri.go:89] found id: "5006a51562d78f245738f517395a022d4af9f17acc61f08716bf0611005b63d5"
	I1108 10:18:50.427325  494365 cri.go:89] found id: "2298a2b9d3c1ef0e53019c3afbbfde3d06e6f3fe8557c487e2ea43cb7b855e00"
	I1108 10:18:50.427328  494365 cri.go:89] found id: "08e54b19799530c7e6d595805299e60c4e547af1744c8361f316134cbbe2a926"
	I1108 10:18:50.427332  494365 cri.go:89] found id: "c8f3c7bba121ffe6cb77869768c5c4e9a6be9275e646d287e9e1c92fbad9874a"
	I1108 10:18:50.427338  494365 cri.go:89] found id: "71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013"
	I1108 10:18:50.427342  494365 cri.go:89] found id: "01bb45ef7ee363146702b3f88b829650308e0f0fbd58f03c70b08a1236b6a4e3"
	I1108 10:18:50.427345  494365 cri.go:89] found id: ""
	I1108 10:18:50.427391  494365 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:18:50.447155  494365 out.go:203] 
	W1108 10:18:50.450085  494365 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:18:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:18:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:18:50.450106  494365 out.go:285] * 
	* 
	W1108 10:18:50.457319  494365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:18:50.461541  494365 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-606645 --alsologtostderr -v=1 failed: exit status 80
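The exit status 80 above traces back to "sudo runc list -f json" failing with "open /run/runc: no such file or directory": runc keeps rootful container state under /run/runc by default, and with the kubelet disabled and no runc-managed containers left on the node that directory is absent, so the listing the pause path shells out to cannot run. Below is a minimal illustrative sketch (not part of the test suite; the file name and the assumption of non-interactive sudo are hypothetical) that reruns the same probe and surfaces runc's stderr:

	// runc_probe.go - rerun the listing the pause path shells out to and show
	// runc's stderr, which is where "open /run/runc" surfaces.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "runc", "list", "-f", "json")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// Exit status 1 with empty stdout matches the failure above; runc's
			// global --root flag would point it at a different state directory.
			fmt.Printf("runc list failed: %v\nstderr: %s", err, stderr.String())
			return
		}
		fmt.Println(stdout.String())
	}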
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-606645
helpers_test.go:243: (dbg) docker inspect embed-certs-606645:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431",
	        "Created": "2025-11-08T10:15:58.52351748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488568,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:17:42.778905003Z",
	            "FinishedAt": "2025-11-08T10:17:41.909900316Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/hostname",
	        "HostsPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/hosts",
	        "LogPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431-json.log",
	        "Name": "/embed-certs-606645",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-606645:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-606645",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431",
	                "LowerDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-606645",
	                "Source": "/var/lib/docker/volumes/embed-certs-606645/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-606645",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-606645",
	                "name.minikube.sigs.k8s.io": "embed-certs-606645",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "faa617a4783710891c93eee6236d41e2924c207fb9c6e0563787ef9593a56e76",
	            "SandboxKey": "/var/run/docker/netns/faa617a47837",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-606645": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:ed:5e:a9:bf:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "805d16fd71681779d29643ac47fdf579dc44f7ad5660dcf2f7e7941c9bae9d2a",
	                    "EndpointID": "ec8c342a4e15387ea3d6ec0a8d4525bf8090bcb688672917e2a7ec51333805bb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-606645",
	                        "d42979033f3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
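Most of the inspect dump above is boilerplate for this failure; the fields the post-mortem actually keys off are State.Status, State.Paused and the published API-server port. A short sketch (illustrative only; the container name embed-certs-606645 comes from this run, and the Go-template indexing mirrors the 22/tcp lookup the harness uses later) that extracts just those via docker inspect -f:

	// inspect_state.go - reduce the full inspect JSON to the three fields that
	// matter when a pause fails: status, paused flag, forwarded 8443 port.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		format := `{{.State.Status}} paused={{.State.Paused}} apiserver=127.0.0.1:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "embed-certs-606645").CombinedOutput()
		if err != nil {
			fmt.Printf("docker inspect failed: %v\n%s", err, out)
			return
		}
		// For the container above this prints: running paused=false apiserver=127.0.0.1:33446
		fmt.Print(string(out))
	}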
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-606645 -n embed-certs-606645
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-606645 -n embed-certs-606645: exit status 2 (360.819988ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
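The status probe above asks only for {{.Host}}, so it prints Running even though the cluster is half paused; the non-zero exit code is why the harness adds "(may be ok)". A sketch (assumptions: minikube is on PATH, the profile name matches this run, and the template field names follow the components minikube's default status output lists) that asks for the kubelet and apiserver fields in the same call:

	// status_fields.go - query host, kubelet and apiserver state in one call,
	// the same way the harness queries {{.Host}} alone.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "status", "-p", "embed-certs-606645",
			"--format", "host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// A non-zero exit means at least one component is not Running,
			// not necessarily that the command itself broke.
			fmt.Printf("\n(status exited non-zero: %v)\n", err)
		}
	}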
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-606645 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-606645 logs -n 25: (1.390047609s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ image   │ old-k8s-version-332573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-332573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-328489       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p cert-expiration-328489                                                                                                                                                                                                                     │ cert-expiration-328489       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │                     │
	│ stop    │ -p no-preload-872727 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ stop    │ -p embed-certs-606645 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:18 UTC │
	│ image   │ no-preload-872727 image list --format=json                                                                                                                                                                                                    │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p disable-driver-mounts-708013                                                                                                                                                                                                               │ disable-driver-mounts-708013 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ image   │ embed-certs-606645 image list --format=json                                                                                                                                                                                                   │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-606645 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:18:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:18:13.078273  491995 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:18:13.078454  491995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:13.078485  491995 out.go:374] Setting ErrFile to fd 2...
	I1108 10:18:13.078506  491995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:13.078780  491995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:18:13.079284  491995 out.go:368] Setting JSON to false
	I1108 10:18:13.081035  491995 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10842,"bootTime":1762586251,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:18:13.081153  491995 start.go:143] virtualization:  
	I1108 10:18:13.084897  491995 out.go:179] * [default-k8s-diff-port-689864] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:18:13.089014  491995 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:18:13.089112  491995 notify.go:221] Checking for updates...
	I1108 10:18:13.095016  491995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:18:13.098639  491995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:18:13.101686  491995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:18:13.104705  491995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:18:13.107544  491995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:18:13.110987  491995 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:13.111093  491995 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:18:13.149335  491995 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:18:13.149471  491995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:18:13.216593  491995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:18:13.207250123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:18:13.216702  491995 docker.go:319] overlay module found
	I1108 10:18:13.219743  491995 out.go:179] * Using the docker driver based on user configuration
	I1108 10:18:13.222592  491995 start.go:309] selected driver: docker
	I1108 10:18:13.222614  491995 start.go:930] validating driver "docker" against <nil>
	I1108 10:18:13.222628  491995 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:18:13.223426  491995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:18:13.279842  491995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:18:13.270163079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:18:13.279995  491995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:18:13.280234  491995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:18:13.283232  491995 out.go:179] * Using Docker driver with root privileges
	I1108 10:18:13.286162  491995 cni.go:84] Creating CNI manager for ""
	I1108 10:18:13.286235  491995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:18:13.286249  491995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:18:13.286332  491995 start.go:353] cluster config:
	{Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:18:13.291336  491995 out.go:179] * Starting "default-k8s-diff-port-689864" primary control-plane node in "default-k8s-diff-port-689864" cluster
	I1108 10:18:13.294175  491995 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:18:13.297089  491995 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:18:13.299918  491995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:18:13.299973  491995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:18:13.299986  491995 cache.go:59] Caching tarball of preloaded images
	I1108 10:18:13.299994  491995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:18:13.300078  491995 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:18:13.300089  491995 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:18:13.300192  491995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/config.json ...
	I1108 10:18:13.300212  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/config.json: {Name:mka75167d9d13eba3d9ad0cbdb5a023e5a95cceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:13.324163  491995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:18:13.324187  491995 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:18:13.324206  491995 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:18:13.324228  491995 start.go:360] acquireMachinesLock for default-k8s-diff-port-689864: {Name:mk8e02949baf85c4a0d930cca199e546b49684a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:18:13.324342  491995 start.go:364] duration metric: took 92.678µs to acquireMachinesLock for "default-k8s-diff-port-689864"
	I1108 10:18:13.324373  491995 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:18:13.324443  491995 start.go:125] createHost starting for "" (driver="docker")
	W1108 10:18:13.381198  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:15.879273  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	I1108 10:18:13.327728  491995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:18:13.327979  491995 start.go:159] libmachine.API.Create for "default-k8s-diff-port-689864" (driver="docker")
	I1108 10:18:13.328018  491995 client.go:173] LocalClient.Create starting
	I1108 10:18:13.328098  491995 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:18:13.328142  491995 main.go:143] libmachine: Decoding PEM data...
	I1108 10:18:13.328159  491995 main.go:143] libmachine: Parsing certificate...
	I1108 10:18:13.328215  491995 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:18:13.328249  491995 main.go:143] libmachine: Decoding PEM data...
	I1108 10:18:13.328263  491995 main.go:143] libmachine: Parsing certificate...
	I1108 10:18:13.328634  491995 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-689864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:18:13.351041  491995 cli_runner.go:211] docker network inspect default-k8s-diff-port-689864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:18:13.351136  491995 network_create.go:284] running [docker network inspect default-k8s-diff-port-689864] to gather additional debugging logs...
	I1108 10:18:13.351167  491995 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-689864
	W1108 10:18:13.368226  491995 cli_runner.go:211] docker network inspect default-k8s-diff-port-689864 returned with exit code 1
	I1108 10:18:13.368261  491995 network_create.go:287] error running [docker network inspect default-k8s-diff-port-689864]: docker network inspect default-k8s-diff-port-689864: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-689864 not found
	I1108 10:18:13.368277  491995 network_create.go:289] output of [docker network inspect default-k8s-diff-port-689864]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-689864 not found
	
	** /stderr **
	I1108 10:18:13.368398  491995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:18:13.388496  491995 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:18:13.388879  491995 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:18:13.389178  491995 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:18:13.389493  491995 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-805d16fd7168 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:6d:98:a6:aa:ba} reservation:<nil>}
	I1108 10:18:13.389916  491995 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a828f0}
	I1108 10:18:13.389942  491995 network_create.go:124] attempt to create docker network default-k8s-diff-port-689864 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 10:18:13.389997  491995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-689864 default-k8s-diff-port-689864
	I1108 10:18:13.455204  491995 network_create.go:108] docker network default-k8s-diff-port-689864 192.168.85.0/24 created
	I1108 10:18:13.455240  491995 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-689864" container
	I1108 10:18:13.455310  491995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:18:13.479412  491995 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-689864 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-689864 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:18:13.497491  491995 oci.go:103] Successfully created a docker volume default-k8s-diff-port-689864
	I1108 10:18:13.497588  491995 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-689864-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-689864 --entrypoint /usr/bin/test -v default-k8s-diff-port-689864:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:18:14.064283  491995 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-689864
	I1108 10:18:14.064344  491995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:18:14.064365  491995 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:18:14.064435  491995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-689864:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 10:18:17.879475  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:19.881652  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:22.382108  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	I1108 10:18:18.468059  491995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-689864:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.403566454s)
	I1108 10:18:18.468091  491995 kic.go:203] duration metric: took 4.40372241s to extract preloaded images to volume ...
	W1108 10:18:18.468239  491995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:18:18.468367  491995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:18:18.522177  491995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-689864 --name default-k8s-diff-port-689864 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-689864 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-689864 --network default-k8s-diff-port-689864 --ip 192.168.85.2 --volume default-k8s-diff-port-689864:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:18:18.856824  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Running}}
	I1108 10:18:18.880848  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:18.906824  491995 cli_runner.go:164] Run: docker exec default-k8s-diff-port-689864 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:18:18.959988  491995 oci.go:144] the created container "default-k8s-diff-port-689864" has a running status.
	I1108 10:18:18.960021  491995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa...
	I1108 10:18:19.459360  491995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:18:19.481333  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:19.504517  491995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:18:19.504536  491995 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-689864 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:18:19.559385  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:19.580152  491995 machine.go:94] provisionDockerMachine start ...
	I1108 10:18:19.580256  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:19.601589  491995 main.go:143] libmachine: Using SSH client type: native
	I1108 10:18:19.601908  491995 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1108 10:18:19.601918  491995 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:18:19.789112  491995 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-689864
	
	I1108 10:18:19.789135  491995 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-689864"
	I1108 10:18:19.789196  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:19.809214  491995 main.go:143] libmachine: Using SSH client type: native
	I1108 10:18:19.809535  491995 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1108 10:18:19.809548  491995 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-689864 && echo "default-k8s-diff-port-689864" | sudo tee /etc/hostname
	I1108 10:18:20.026798  491995 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-689864
	
	I1108 10:18:20.026966  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:20.053270  491995 main.go:143] libmachine: Using SSH client type: native
	I1108 10:18:20.053611  491995 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1108 10:18:20.053631  491995 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-689864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-689864/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-689864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:18:20.221594  491995 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:18:20.221623  491995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:18:20.221643  491995 ubuntu.go:190] setting up certificates
	I1108 10:18:20.221653  491995 provision.go:84] configureAuth start
	I1108 10:18:20.221711  491995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-689864
	I1108 10:18:20.241228  491995 provision.go:143] copyHostCerts
	I1108 10:18:20.241296  491995 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:18:20.241311  491995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:18:20.241392  491995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:18:20.241485  491995 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:18:20.241495  491995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:18:20.241522  491995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:18:20.241588  491995 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:18:20.241598  491995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:18:20.241627  491995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:18:20.241690  491995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-689864 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-689864 localhost minikube]
	I1108 10:18:20.728216  491995 provision.go:177] copyRemoteCerts
	I1108 10:18:20.728297  491995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:18:20.728342  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:20.746428  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:20.853107  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 10:18:20.882572  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:18:20.903183  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:18:20.922001  491995 provision.go:87] duration metric: took 700.325829ms to configureAuth
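The generated server certificate carries the SANs listed in the provision.go line above (127.0.0.1, 192.168.85.2, the node name, localhost, minikube). If that needs confirming, a standard openssl inspection works (sketch; the path is the one shown in the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'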
	I1108 10:18:20.922081  491995 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:18:20.922285  491995 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:20.922401  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:20.939815  491995 main.go:143] libmachine: Using SSH client type: native
	I1108 10:18:20.940137  491995 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1108 10:18:20.940159  491995 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:18:21.291165  491995 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:18:21.291190  491995 machine.go:97] duration metric: took 1.711018752s to provisionDockerMachine
	I1108 10:18:21.291200  491995 client.go:176] duration metric: took 7.963170255s to LocalClient.Create
	I1108 10:18:21.291214  491995 start.go:167] duration metric: took 7.963236051s to libmachine.API.Create "default-k8s-diff-port-689864"
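Provisioning ends by writing the insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarting CRI-O (the tee/systemctl command above). To confirm it took effect inside the node (a sketch; the file path and flag value come from the log, and it assumes the crio unit in the kicbase image sources that environment file):

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio | grep -n CRIO_MINIKUBE_OPTIONS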
	I1108 10:18:21.291222  491995 start.go:293] postStartSetup for "default-k8s-diff-port-689864" (driver="docker")
	I1108 10:18:21.291240  491995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:18:21.291305  491995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:18:21.291348  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:21.309720  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:21.412817  491995 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:18:21.416102  491995 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:18:21.416132  491995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:18:21.416144  491995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:18:21.416197  491995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:18:21.416287  491995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:18:21.416390  491995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:18:21.423691  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:18:21.442330  491995 start.go:296] duration metric: took 151.090203ms for postStartSetup
	I1108 10:18:21.442719  491995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-689864
	I1108 10:18:21.459627  491995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/config.json ...
	I1108 10:18:21.460006  491995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:18:21.460080  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:21.479312  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:21.581775  491995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:18:21.586221  491995 start.go:128] duration metric: took 8.261762779s to createHost
	I1108 10:18:21.586243  491995 start.go:83] releasing machines lock for "default-k8s-diff-port-689864", held for 8.261887212s
	I1108 10:18:21.586312  491995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-689864
	I1108 10:18:21.602200  491995 ssh_runner.go:195] Run: cat /version.json
	I1108 10:18:21.602236  491995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:18:21.602308  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:21.602333  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:21.626596  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:21.629601  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:21.728637  491995 ssh_runner.go:195] Run: systemctl --version
	I1108 10:18:21.820657  491995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:18:21.867007  491995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:18:21.872714  491995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:18:21.872831  491995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:18:21.902637  491995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:18:21.902713  491995 start.go:496] detecting cgroup driver to use...
	I1108 10:18:21.902762  491995 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:18:21.902826  491995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:18:21.920794  491995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:18:21.934100  491995 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:18:21.934216  491995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:18:21.951740  491995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:18:21.974156  491995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:18:22.109393  491995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:18:22.245897  491995 docker.go:234] disabling docker service ...
	I1108 10:18:22.245986  491995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:18:22.269759  491995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:18:22.283616  491995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:18:22.400637  491995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:18:22.526057  491995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:18:22.539835  491995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:18:22.553698  491995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:18:22.553813  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.563034  491995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:18:22.563136  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.571950  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.580743  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.590037  491995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:18:22.598269  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.606955  491995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.620240  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.629550  491995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:18:22.636868  491995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:18:22.644413  491995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:18:22.752331  491995 ssh_runner.go:195] Run: sudo systemctl restart crio
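Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before the restart (reconstructed from the commands in this log; other keys already present in the drop-in are untouched):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]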
	I1108 10:18:22.896747  491995 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:18:22.896818  491995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:18:22.900701  491995 start.go:564] Will wait 60s for crictl version
	I1108 10:18:22.900770  491995 ssh_runner.go:195] Run: which crictl
	I1108 10:18:22.904413  491995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:18:22.933945  491995 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:18:22.934042  491995 ssh_runner.go:195] Run: crio --version
	I1108 10:18:22.963771  491995 ssh_runner.go:195] Run: crio --version
	I1108 10:18:22.999042  491995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:18:23.002930  491995 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-689864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:18:23.020485  491995 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:18:23.024979  491995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:18:23.034912  491995 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:18:23.035067  491995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:18:23.035129  491995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:18:23.074217  491995 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:18:23.074243  491995 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:18:23.074306  491995 ssh_runner.go:195] Run: sudo crictl images --output json
	W1108 10:18:24.879951  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:26.880424  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	I1108 10:18:23.107128  491995 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:18:23.107165  491995 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:18:23.107173  491995 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1108 10:18:23.107260  491995 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-689864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
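The unit fragment above is written a few lines below as the 378-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in. On the node it can be reviewed with the usual systemd commands (sketch):

	systemctl cat kubelet
	systemctl status kubelet --no-pager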
	I1108 10:18:23.107344  491995 ssh_runner.go:195] Run: crio config
	I1108 10:18:23.161882  491995 cni.go:84] Creating CNI manager for ""
	I1108 10:18:23.161905  491995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:18:23.161928  491995 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:18:23.161969  491995 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-689864 NodeName:default-k8s-diff-port-689864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:18:23.162146  491995 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-689864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:18:23.162236  491995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:18:23.170052  491995 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:18:23.170148  491995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:18:23.177962  491995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 10:18:23.191488  491995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:18:23.204113  491995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
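The 2225-byte kubeadm.yaml.new just copied is the rendered config printed above; it is promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs later in this log. If a config like this ever needs a manual sanity check, recent kubeadm releases ship a validator (sketch, using the binary path from this log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml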
	I1108 10:18:23.217853  491995 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:18:23.221612  491995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:18:23.231958  491995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:18:23.354239  491995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:18:23.371983  491995 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864 for IP: 192.168.85.2
	I1108 10:18:23.372050  491995 certs.go:195] generating shared ca certs ...
	I1108 10:18:23.372085  491995 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:23.372268  491995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:18:23.372363  491995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:18:23.372387  491995 certs.go:257] generating profile certs ...
	I1108 10:18:23.372481  491995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.key
	I1108 10:18:23.372531  491995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt with IP's: []
	I1108 10:18:24.313614  491995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt ...
	I1108 10:18:24.313646  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: {Name:mk584b6ad495780d334eef820aa9b9e2b0551705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:24.313850  491995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.key ...
	I1108 10:18:24.313867  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.key: {Name:mkcecb8a164bb53aa73fe055342a3d366b356d1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:24.313961  491995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4
	I1108 10:18:24.313979  491995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt.d58dafe4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 10:18:25.323297  491995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt.d58dafe4 ...
	I1108 10:18:25.323329  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt.d58dafe4: {Name:mk2d84e0ad50444118242d9cba27a875ae719acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:25.323516  491995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4 ...
	I1108 10:18:25.323531  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4: {Name:mk98ff19cf78cb42b414fddad5ad6b814616f419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:25.323619  491995 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt.d58dafe4 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt
	I1108 10:18:25.323709  491995 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key
	I1108 10:18:25.323771  491995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key
	I1108 10:18:25.323789  491995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt with IP's: []
	I1108 10:18:26.355493  491995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt ...
	I1108 10:18:26.355524  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt: {Name:mk94af383d9b89198d16b000614c82a44c632ded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:26.355714  491995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key ...
	I1108 10:18:26.355729  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key: {Name:mk473629d65a136e1a5d7b6828688f90b070a87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:26.355924  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:18:26.355967  491995 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:18:26.355982  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:18:26.356006  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:18:26.356061  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:18:26.356089  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:18:26.356138  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:18:26.356778  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:18:26.376632  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:18:26.398810  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:18:26.418970  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:18:26.437464  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 10:18:26.456773  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 10:18:26.479043  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:18:26.500772  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:18:26.522752  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:18:26.542282  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:18:26.560773  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:18:26.578902  491995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:18:26.591899  491995 ssh_runner.go:195] Run: openssl version
	I1108 10:18:26.598469  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:18:26.606896  491995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:18:26.610589  491995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:18:26.610659  491995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:18:26.651237  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:18:26.659450  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:18:26.668057  491995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:18:26.671775  491995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:18:26.671848  491995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:18:26.712632  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:18:26.721140  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:18:26.729327  491995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:18:26.733126  491995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:18:26.733190  491995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:18:26.776644  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
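The three openssl/ln sequences above follow the standard OpenSSL CA directory convention: each certificate under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink so verification can locate it. The hash in each link name is exactly what the -hash invocation prints, e.g. (value matches the b5213941.0 link above):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941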
	I1108 10:18:26.785131  491995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:18:26.788779  491995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:18:26.788849  491995 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:18:26.788968  491995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:18:26.789036  491995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:18:26.814833  491995 cri.go:89] found id: ""
	I1108 10:18:26.814933  491995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:18:26.822691  491995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:18:26.830540  491995 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:18:26.830639  491995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:18:26.839051  491995 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:18:26.839067  491995 kubeadm.go:158] found existing configuration files:
	
	I1108 10:18:26.839117  491995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1108 10:18:26.847080  491995 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:18:26.847151  491995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:18:26.854414  491995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1108 10:18:26.867972  491995 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:18:26.868078  491995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:18:26.875538  491995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1108 10:18:26.883935  491995 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:18:26.884022  491995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:18:26.891627  491995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1108 10:18:26.899579  491995 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:18:26.899655  491995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:18:26.906831  491995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:18:26.969419  491995 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:18:26.969710  491995 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:18:27.044325  491995 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
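Only three non-fatal warnings survive the preflight phase because the Start line above passes a long --ignore-preflight-errors list; checks such as SystemVerification and Swap are expected to fail inside a docker-driver node. The same checks can be re-run in isolation against the generated config (sketch):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml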
	W1108 10:18:28.880675  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:30.896440  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:33.379783  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	I1108 10:18:34.879989  488441 pod_ready.go:94] pod "coredns-66bc5c9577-t2frl" is "Ready"
	I1108 10:18:34.880013  488441 pod_ready.go:86] duration metric: took 38.506205954s for pod "coredns-66bc5c9577-t2frl" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.883753  488441 pod_ready.go:83] waiting for pod "etcd-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.889849  488441 pod_ready.go:94] pod "etcd-embed-certs-606645" is "Ready"
	I1108 10:18:34.889931  488441 pod_ready.go:86] duration metric: took 6.15501ms for pod "etcd-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.892656  488441 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.898935  488441 pod_ready.go:94] pod "kube-apiserver-embed-certs-606645" is "Ready"
	I1108 10:18:34.899013  488441 pod_ready.go:86] duration metric: took 6.325818ms for pod "kube-apiserver-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.902366  488441 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:35.077484  488441 pod_ready.go:94] pod "kube-controller-manager-embed-certs-606645" is "Ready"
	I1108 10:18:35.077557  488441 pod_ready.go:86] duration metric: took 175.114847ms for pod "kube-controller-manager-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:35.278380  488441 pod_ready.go:83] waiting for pod "kube-proxy-tvxrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:35.677442  488441 pod_ready.go:94] pod "kube-proxy-tvxrb" is "Ready"
	I1108 10:18:35.677510  488441 pod_ready.go:86] duration metric: took 399.10109ms for pod "kube-proxy-tvxrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:35.879279  488441 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:36.277091  488441 pod_ready.go:94] pod "kube-scheduler-embed-certs-606645" is "Ready"
	I1108 10:18:36.277117  488441 pod_ready.go:86] duration metric: took 397.774163ms for pod "kube-scheduler-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:36.277129  488441 pod_ready.go:40] duration metric: took 39.906951389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:18:36.375615  488441 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:18:36.379635  488441 out.go:179] * Done! kubectl is now configured to use "embed-certs-606645" cluster and "default" namespace by default
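The 488441 lines interleaved above belong to the parallel embed-certs-606645 run: its pod_ready loop polls each labelled kube-system pod until it reports Ready, then that test finishes. A roughly equivalent one-off check with kubectl (sketch; the label selector is one of those listed in the log, the timeout is illustrative):

	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s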
	I1108 10:18:43.851459  491995 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:18:43.851522  491995 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:18:43.851643  491995 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:18:43.851718  491995 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:18:43.851759  491995 kubeadm.go:319] OS: Linux
	I1108 10:18:43.851811  491995 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:18:43.851873  491995 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:18:43.851961  491995 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:18:43.852025  491995 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:18:43.852076  491995 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:18:43.852136  491995 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:18:43.852196  491995 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:18:43.852249  491995 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:18:43.852299  491995 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:18:43.852376  491995 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:18:43.852480  491995 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:18:43.852573  491995 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:18:43.852639  491995 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 10:18:43.855832  491995 out.go:252]   - Generating certificates and keys ...
	I1108 10:18:43.855928  491995 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:18:43.856002  491995 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:18:43.856091  491995 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:18:43.856158  491995 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:18:43.856238  491995 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:18:43.856301  491995 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:18:43.856361  491995 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:18:43.856507  491995 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-689864 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:18:43.856568  491995 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:18:43.856713  491995 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-689864 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:18:43.856787  491995 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:18:43.856862  491995 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:18:43.856943  491995 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:18:43.857023  491995 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:18:43.857083  491995 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:18:43.857153  491995 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:18:43.857225  491995 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:18:43.857304  491995 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:18:43.857372  491995 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:18:43.857501  491995 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:18:43.857588  491995 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:18:43.860523  491995 out.go:252]   - Booting up control plane ...
	I1108 10:18:43.860640  491995 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:18:43.860750  491995 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:18:43.860830  491995 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:18:43.860997  491995 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:18:43.861156  491995 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:18:43.861322  491995 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:18:43.861419  491995 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:18:43.861461  491995 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:18:43.861622  491995 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:18:43.861753  491995 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:18:43.861831  491995 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.49364ms
	I1108 10:18:43.861948  491995 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:18:43.862052  491995 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1108 10:18:43.862172  491995 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:18:43.862260  491995 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 10:18:43.862352  491995 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.779165803s
	I1108 10:18:43.862426  491995 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.098441104s
	I1108 10:18:43.862506  491995 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.00156548s
	I1108 10:18:43.862623  491995 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:18:43.862786  491995 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:18:43.862876  491995 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:18:43.863069  491995 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-689864 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:18:43.863128  491995 kubeadm.go:319] [bootstrap-token] Using token: abieie.xjlv7vvaabvnphsl
	I1108 10:18:43.866068  491995 out.go:252]   - Configuring RBAC rules ...
	I1108 10:18:43.866181  491995 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:18:43.866277  491995 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:18:43.866424  491995 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:18:43.866564  491995 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:18:43.866686  491995 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:18:43.866776  491995 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:18:43.866903  491995 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:18:43.866950  491995 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:18:43.866998  491995 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:18:43.867003  491995 kubeadm.go:319] 
	I1108 10:18:43.867066  491995 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:18:43.867070  491995 kubeadm.go:319] 
	I1108 10:18:43.867150  491995 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:18:43.867155  491995 kubeadm.go:319] 
	I1108 10:18:43.867181  491995 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:18:43.867242  491995 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:18:43.867296  491995 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:18:43.867300  491995 kubeadm.go:319] 
	I1108 10:18:43.867357  491995 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:18:43.867361  491995 kubeadm.go:319] 
	I1108 10:18:43.867411  491995 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:18:43.867415  491995 kubeadm.go:319] 
	I1108 10:18:43.867470  491995 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:18:43.867549  491995 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:18:43.867634  491995 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:18:43.867639  491995 kubeadm.go:319] 
	I1108 10:18:43.867728  491995 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:18:43.867808  491995 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:18:43.867812  491995 kubeadm.go:319] 
	I1108 10:18:43.867901  491995 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token abieie.xjlv7vvaabvnphsl \
	I1108 10:18:43.868009  491995 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 10:18:43.868030  491995 kubeadm.go:319] 	--control-plane 
	I1108 10:18:43.868034  491995 kubeadm.go:319] 
	I1108 10:18:43.868124  491995 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:18:43.868128  491995 kubeadm.go:319] 
	I1108 10:18:43.868215  491995 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token abieie.xjlv7vvaabvnphsl \
	I1108 10:18:43.868342  491995 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
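The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA public key. One common way to recompute it on the control-plane node (sketch; the cert path follows the certificatesDir used throughout this log):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256
	# expected to match the hash shown in the join commands above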
	I1108 10:18:43.868352  491995 cni.go:84] Creating CNI manager for ""
	I1108 10:18:43.868359  491995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:18:43.873227  491995 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:18:43.876155  491995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:18:43.880490  491995 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:18:43.880512  491995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:18:43.894549  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:18:44.213108  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:44.213214  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-689864 minikube.k8s.io/updated_at=2025_11_08T10_18_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=default-k8s-diff-port-689864 minikube.k8s.io/primary=true
	I1108 10:18:44.213250  491995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:18:44.461378  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:44.461461  491995 ops.go:34] apiserver oom_adj: -16
	I1108 10:18:44.961801  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:45.461494  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:45.961765  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:46.462281  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:46.961587  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:47.462127  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:47.962132  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:48.094782  491995 kubeadm.go:1114] duration metric: took 3.881728738s to wait for elevateKubeSystemPrivileges
	I1108 10:18:48.094812  491995 kubeadm.go:403] duration metric: took 21.305967559s to StartCluster
	I1108 10:18:48.094830  491995 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:48.094891  491995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:18:48.101981  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:48.102323  491995 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:18:48.102730  491995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:18:48.103032  491995 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:48.103072  491995 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:18:48.103215  491995 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-689864"
	I1108 10:18:48.103249  491995 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-689864"
	I1108 10:18:48.103275  491995 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:18:48.103745  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:48.104701  491995 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-689864"
	I1108 10:18:48.104730  491995 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-689864"
	I1108 10:18:48.105210  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:48.105383  491995 out.go:179] * Verifying Kubernetes components...
	I1108 10:18:48.107590  491995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:18:48.155221  491995 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:18:48.155418  491995 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-689864"
	I1108 10:18:48.155452  491995 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:18:48.155884  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:48.162161  491995 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:18:48.162183  491995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:18:48.162254  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:48.194120  491995 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:18:48.194142  491995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:18:48.194206  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:48.208318  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:48.232387  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:48.628059  491995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:18:48.724782  491995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:18:48.724954  491995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:18:48.771786  491995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:18:49.439442  491995 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-689864" to be "Ready" ...
	I1108 10:18:49.439670  491995 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1108 10:18:49.860009  491995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.088126006s)
	I1108 10:18:49.863276  491995 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
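For reference, the --discovery-token-ca-cert-hash value printed in the kubeadm join commands earlier in this log is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. Below is a minimal Go sketch that recomputes such a hash; it is illustrative only (not minikube code), and the CA path is an assumption based on where minikube normally keeps its certificates (kubeadm's own default is /etc/kubernetes/pki/ca.crt).

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed path: minikube copies the cluster CA to /var/lib/minikube/certs/ca.crt.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM certificate found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	}

If run against the same CA that produced the join command above, the printed value should match the sha256:1af093ed... hash recorded in this log.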
	
	
	==> CRI-O <==
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.870878826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.88878531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.897466734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.9273217Z" level=info msg="Created container 71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q/dashboard-metrics-scraper" id=ecde474d-c635-4031-ac14-e407d3d79206 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.933310989Z" level=info msg="Starting container: 71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013" id=a933eaec-f6d5-415c-aed6-88b49e29e602 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.93960159Z" level=info msg="Started container" PID=1636 containerID=71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q/dashboard-metrics-scraper id=a933eaec-f6d5-415c-aed6-88b49e29e602 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb9583b4183e67dd0ebf4a8684fbc41f9148bda7f962e6305b00d5d369e3a5a9
	Nov 08 10:18:30 embed-certs-606645 conmon[1634]: conmon 71f497b81ff8118cbaa1 <ninfo>: container 1636 exited with status 1
	Nov 08 10:18:31 embed-certs-606645 crio[650]: time="2025-11-08T10:18:31.093168274Z" level=info msg="Removing container: 0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74" id=f6bdbd6b-227c-41a4-a5b8-fe32f2dbf534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:18:31 embed-certs-606645 crio[650]: time="2025-11-08T10:18:31.108713744Z" level=info msg="Error loading conmon cgroup of container 0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74: cgroup deleted" id=f6bdbd6b-227c-41a4-a5b8-fe32f2dbf534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:18:31 embed-certs-606645 crio[650]: time="2025-11-08T10:18:31.11915941Z" level=info msg="Removed container 0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q/dashboard-metrics-scraper" id=f6bdbd6b-227c-41a4-a5b8-fe32f2dbf534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.026230059Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.031291685Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.031332448Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.031350319Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.03463679Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.034872763Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.034956604Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.045881528Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.046042859Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.046188239Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.049969008Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.05011273Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.050190105Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.055411806Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.055550055Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	71f497b81ff81       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   bb9583b4183e6       dashboard-metrics-scraper-6ffb444bf9-qxk4q   kubernetes-dashboard
	c20664f059121       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   7e2b7eb23d85f       storage-provisioner                          kube-system
	01bb45ef7ee36       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   ae02fc92dee49       kubernetes-dashboard-855c9754f9-chddn        kubernetes-dashboard
	042edc78a1ba0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   969929b521f3f       coredns-66bc5c9577-t2frl                     kube-system
	6810790b010f4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   01da0522f2b41       busybox                                      default
	442e9f54ea9d7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   e68b4f36094f1       kube-proxy-tvxrb                             kube-system
	e0039c06b9d3e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   7e2b7eb23d85f       storage-provisioner                          kube-system
	391f1b0171025       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   14d0d300a14b6       kindnet-tb5h7                                kube-system
	5006a51562d78       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3b1188a724fb7       kube-controller-manager-embed-certs-606645   kube-system
	2298a2b9d3c1e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c50697c11e30b       kube-scheduler-embed-certs-606645            kube-system
	08e54b1979953       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9eea23c3396e0       kube-apiserver-embed-certs-606645            kube-system
	c8f3c7bba121f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   b377b700040f4       etcd-embed-certs-606645                      kube-system
	
	
	==> coredns [042edc78a1ba06a2c62964d4f6528d68d583b2b7f403c53d71c4fee80cc08052] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44762 - 17640 "HINFO IN 2755601552339447740.1591408011964283048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017086707s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-606645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-606645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=embed-certs-606645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_16_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:16:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-606645
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:18:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:18:35 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:18:35 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:18:35 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:18:35 +0000   Sat, 08 Nov 2025 10:17:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-606645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                64b557bb-52b3-4c19-9c89-a18ac4cd988b
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-t2frl                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-embed-certs-606645                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-tb5h7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-embed-certs-606645             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-606645    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-tvxrb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-606645             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qxk4q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-chddn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node embed-certs-606645 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node embed-certs-606645 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-606645 event: Registered Node embed-certs-606645 in Controller
	  Normal   NodeReady                97s                    kubelet          Node embed-certs-606645 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node embed-certs-606645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node embed-certs-606645 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node embed-certs-606645 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-606645 event: Registered Node embed-certs-606645 in Controller
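For reference, the "Allocated resources" summary in the describe output above is simply the column sums of the per-pod figures listed under "Non-terminated Pods": CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, which against the node's 2 allocatable CPUs (2000m) is the 42% shown; memory requests 70Mi + 100Mi + 50Mi = 220Mi, and memory limits 170Mi + 50Mi = 220Mi.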
	
	
	==> dmesg <==
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c8f3c7bba121ffe6cb77869768c5c4e9a6be9275e646d287e9e1c92fbad9874a] <==
	{"level":"warn","ts":"2025-11-08T10:17:53.310611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.333994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.373296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.376770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.408081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.421476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.441662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.484270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.489351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.505809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.543361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.549089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.572776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.584421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.602276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.620989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.638730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.671077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.689699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.721613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.763625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.788293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.811607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.828438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.894577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51582","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:18:51 up  3:01,  0 user,  load average: 3.52, 3.78, 2.87
	Linux embed-certs-606645 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [391f1b0171025f124806525a7d27c429a750dc65ffcdaded79fc59e096061d09] <==
	I1108 10:17:56.818930       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:17:56.819248       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:17:56.819433       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:17:56.819447       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:17:56.819463       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:17:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:17:57.025614       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:17:57.025641       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:17:57.025650       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:17:57.026302       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:18:27.025862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:18:27.025933       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:18:27.026092       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:18:27.027188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:18:28.625765       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:18:28.625804       1 metrics.go:72] Registering metrics
	I1108 10:18:28.625873       1 controller.go:711] "Syncing nftables rules"
	I1108 10:18:37.025836       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:18:37.025956       1 main.go:301] handling current node
	I1108 10:18:47.032442       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:18:47.032475       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08e54b19799530c7e6d595805299e60c4e547af1744c8361f316134cbbe2a926] <==
	I1108 10:17:54.753036       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:17:54.763882       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:17:54.807315       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:17:54.820769       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:17:54.845962       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:17:54.845994       1 policy_source.go:240] refreshing policies
	I1108 10:17:54.846204       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:17:54.846217       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:17:54.846302       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:17:54.846790       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:17:54.854209       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 10:17:54.854305       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:17:54.875687       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:17:54.886056       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:17:55.386575       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:17:55.457304       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:17:55.488564       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:17:55.501041       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:17:55.515576       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:17:55.561478       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:17:55.605582       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.75.35"}
	I1108 10:17:55.647125       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.138.71"}
	I1108 10:17:58.367375       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:17:58.615029       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:17:58.713934       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5006a51562d78f245738f517395a022d4af9f17acc61f08716bf0611005b63d5] <==
	I1108 10:17:58.197497       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:17:58.202646       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:17:58.203892       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:17:58.203936       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:17:58.204044       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:17:58.204119       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-606645"
	I1108 10:17:58.204169       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:17:58.207508       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:17:58.207723       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:17:58.208077       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:17:58.208194       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:17:58.209632       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:17:58.209691       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:17:58.210385       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:17:58.210487       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:17:58.210652       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:17:58.214164       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:17:58.217437       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:17:58.222725       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:17:58.222741       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 10:17:58.225570       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:17:58.233410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:17:58.233517       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:17:58.233576       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:17:58.236948       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [442e9f54ea9d773c0c532faed6983236644cf9b2a7b140f49dbc444185e223a1] <==
	I1108 10:17:56.869421       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:17:56.969599       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:17:57.071043       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:17:57.071078       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:17:57.071145       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:17:57.128358       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:17:57.128412       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:17:57.132703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:17:57.133030       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:17:57.133048       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:17:57.134306       1 config.go:200] "Starting service config controller"
	I1108 10:17:57.134327       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:17:57.134344       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:17:57.134348       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:17:57.134358       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:17:57.134362       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:17:57.135001       1 config.go:309] "Starting node config controller"
	I1108 10:17:57.135019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:17:57.135027       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:17:57.234450       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:17:57.234494       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:17:57.234523       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
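The repeated "Waiting for caches to sync" / "Caches are synced" lines in this kube-proxy log (and in the kindnet and kube-controller-manager logs above) come from client-go's shared-informer startup pattern. The following is a minimal, self-contained sketch of that pattern, not kube-proxy source; the kubeconfig path is an assumption for the example.

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location for the example.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Shared informer factory; the Node informer here parallels the
		// "node informer cache" controller named in the log.
		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()
		factory.Start(ctx.Done())

		// "Waiting for caches to sync": block until the initial LIST has
		// populated the local cache, then proceed.
		if !cache.WaitForCacheSync(ctx.Done(), nodeInformer.HasSynced) {
			panic("timed out waiting for caches to sync")
		}
		fmt.Println("caches are synced")
	}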
	
	
	==> kube-scheduler [2298a2b9d3c1ef0e53019c3afbbfde3d06e6f3fe8557c487e2ea43cb7b855e00] <==
	I1108 10:17:54.722494       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:17:54.744766       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:17:54.745010       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:54.745036       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:54.745060       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 10:17:54.774768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:17:54.774984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:17:54.775033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:17:54.775068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:17:54.775368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:17:54.775420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:17:54.775470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:17:54.775541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:17:54.775593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:17:54.775709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:17:54.775753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:17:54.778455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:17:54.785414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:17:54.785506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:17:54.785568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:17:54.785666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:17:54.785718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:17:54.785761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:17:54.809367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:17:56.345390       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:17:58 embed-certs-606645 kubelet[774]: I1108 10:17:58.929264     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbl7\" (UniqueName: \"kubernetes.io/projected/aae13813-227e-4300-9a66-f13600fe1537-kube-api-access-qlbl7\") pod \"kubernetes-dashboard-855c9754f9-chddn\" (UID: \"aae13813-227e-4300-9a66-f13600fe1537\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-chddn"
	Nov 08 10:17:58 embed-certs-606645 kubelet[774]: I1108 10:17:58.929287     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/05d6a2a5-b1ee-4b71-8c85-948aad881f39-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qxk4q\" (UID: \"05d6a2a5-b1ee-4b71-8c85-948aad881f39\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q"
	Nov 08 10:17:58 embed-certs-606645 kubelet[774]: I1108 10:17:58.929314     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s9mc\" (UniqueName: \"kubernetes.io/projected/05d6a2a5-b1ee-4b71-8c85-948aad881f39-kube-api-access-5s9mc\") pod \"dashboard-metrics-scraper-6ffb444bf9-qxk4q\" (UID: \"05d6a2a5-b1ee-4b71-8c85-948aad881f39\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q"
	Nov 08 10:17:59 embed-certs-606645 kubelet[774]: W1108 10:17:59.144524     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/crio-ae02fc92dee4944ec8227daa37cb4da3a578e559f2b4ec7e525002142dc300c5 WatchSource:0}: Error finding container ae02fc92dee4944ec8227daa37cb4da3a578e559f2b4ec7e525002142dc300c5: Status 404 returned error can't find the container with id ae02fc92dee4944ec8227daa37cb4da3a578e559f2b4ec7e525002142dc300c5
	Nov 08 10:17:59 embed-certs-606645 kubelet[774]: W1108 10:17:59.166934     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/crio-bb9583b4183e67dd0ebf4a8684fbc41f9148bda7f962e6305b00d5d369e3a5a9 WatchSource:0}: Error finding container bb9583b4183e67dd0ebf4a8684fbc41f9148bda7f962e6305b00d5d369e3a5a9: Status 404 returned error can't find the container with id bb9583b4183e67dd0ebf4a8684fbc41f9148bda7f962e6305b00d5d369e3a5a9
	Nov 08 10:18:04 embed-certs-606645 kubelet[774]: I1108 10:18:04.571472     774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 10:18:08 embed-certs-606645 kubelet[774]: I1108 10:18:08.894107     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-chddn" podStartSLOduration=3.982725595 podStartE2EDuration="10.894087004s" podCreationTimestamp="2025-11-08 10:17:58 +0000 UTC" firstStartedPulling="2025-11-08 10:17:59.149742581 +0000 UTC m=+9.604112721" lastFinishedPulling="2025-11-08 10:18:06.0611039 +0000 UTC m=+16.515474130" observedRunningTime="2025-11-08 10:18:07.014985188 +0000 UTC m=+17.469355353" watchObservedRunningTime="2025-11-08 10:18:08.894087004 +0000 UTC m=+19.348457136"
	Nov 08 10:18:12 embed-certs-606645 kubelet[774]: I1108 10:18:12.016402     774 scope.go:117] "RemoveContainer" containerID="81383ae86c354f0cb3745af745bc02446fbda28cded31f305950f5ebd9cfe7cb"
	Nov 08 10:18:13 embed-certs-606645 kubelet[774]: I1108 10:18:13.022775     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:13 embed-certs-606645 kubelet[774]: I1108 10:18:13.023358     774 scope.go:117] "RemoveContainer" containerID="81383ae86c354f0cb3745af745bc02446fbda28cded31f305950f5ebd9cfe7cb"
	Nov 08 10:18:13 embed-certs-606645 kubelet[774]: E1108 10:18:13.031008     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:14 embed-certs-606645 kubelet[774]: I1108 10:18:14.027050     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:14 embed-certs-606645 kubelet[774]: E1108 10:18:14.027221     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:18 embed-certs-606645 kubelet[774]: I1108 10:18:18.393692     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:18 embed-certs-606645 kubelet[774]: E1108 10:18:18.393895     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:27 embed-certs-606645 kubelet[774]: I1108 10:18:27.072140     774 scope.go:117] "RemoveContainer" containerID="e0039c06b9d3ed0c15a3cbdb6881dcdc6c82aaadaa34cddb7cdae0c77d071028"
	Nov 08 10:18:30 embed-certs-606645 kubelet[774]: I1108 10:18:30.866859     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:31 embed-certs-606645 kubelet[774]: I1108 10:18:31.087634     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:31 embed-certs-606645 kubelet[774]: I1108 10:18:31.088241     774 scope.go:117] "RemoveContainer" containerID="71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013"
	Nov 08 10:18:31 embed-certs-606645 kubelet[774]: E1108 10:18:31.089971     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:38 embed-certs-606645 kubelet[774]: I1108 10:18:38.393936     774 scope.go:117] "RemoveContainer" containerID="71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013"
	Nov 08 10:18:38 embed-certs-606645 kubelet[774]: E1108 10:18:38.394600     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:48 embed-certs-606645 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:18:49 embed-certs-606645 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:18:49 embed-certs-606645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [01bb45ef7ee363146702b3f88b829650308e0f0fbd58f03c70b08a1236b6a4e3] <==
	2025/11/08 10:18:06 Using namespace: kubernetes-dashboard
	2025/11/08 10:18:06 Using in-cluster config to connect to apiserver
	2025/11/08 10:18:06 Using secret token for csrf signing
	2025/11/08 10:18:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:18:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:18:06 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:18:06 Generating JWE encryption key
	2025/11/08 10:18:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:18:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:18:06 Initializing JWE encryption key from synchronized object
	2025/11/08 10:18:06 Creating in-cluster Sidecar client
	2025/11/08 10:18:06 Serving insecurely on HTTP port: 9090
	2025/11/08 10:18:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:18:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:18:06 Starting overwatch
	
	
	==> storage-provisioner [c20664f059121df02ea619e301151ea7513e1dd1eb20d1419ce1a514d6ca58da] <==
	I1108 10:18:27.150125       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:18:27.175170       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:18:27.176463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:18:27.180707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:30.635970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:34.910742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:38.509471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:41.562667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:44.585026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:44.590782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:18:44.590916       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:18:44.591074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-606645_b281edc5-a6f4-4092-a1c3-8614294ee2b1!
	I1108 10:18:44.592597       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87e0990b-37ee-4c3a-94da-724d0f4a2331", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-606645_b281edc5-a6f4-4092-a1c3-8614294ee2b1 became leader
	W1108 10:18:44.599321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:44.603892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:18:44.691920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-606645_b281edc5-a6f4-4092-a1c3-8614294ee2b1!
	W1108 10:18:46.607129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:46.612137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:48.615949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:48.628289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:50.631790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:50.641669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e0039c06b9d3ed0c15a3cbdb6881dcdc6c82aaadaa34cddb7cdae0c77d071028] <==
	I1108 10:17:56.737791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:18:26.740002       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-606645 -n embed-certs-606645
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-606645 -n embed-certs-606645: exit status 2 (374.773122ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-606645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-606645
helpers_test.go:243: (dbg) docker inspect embed-certs-606645:

-- stdout --
	[
	    {
	        "Id": "d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431",
	        "Created": "2025-11-08T10:15:58.52351748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488568,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:17:42.778905003Z",
	            "FinishedAt": "2025-11-08T10:17:41.909900316Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/hostname",
	        "HostsPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/hosts",
	        "LogPath": "/var/lib/docker/containers/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431-json.log",
	        "Name": "/embed-certs-606645",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-606645:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-606645",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431",
	                "LowerDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6ddf729d627cc1651b41c68c56f37d0b0850128b25abe98088ffa2dc66fea31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-606645",
	                "Source": "/var/lib/docker/volumes/embed-certs-606645/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-606645",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-606645",
	                "name.minikube.sigs.k8s.io": "embed-certs-606645",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "faa617a4783710891c93eee6236d41e2924c207fb9c6e0563787ef9593a56e76",
	            "SandboxKey": "/var/run/docker/netns/faa617a47837",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-606645": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:ed:5e:a9:bf:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "805d16fd71681779d29643ac47fdf579dc44f7ad5660dcf2f7e7941c9bae9d2a",
	                    "EndpointID": "ec8c342a4e15387ea3d6ec0a8d4525bf8090bcb688672917e2a7ec51333805bb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-606645",
	                        "d42979033f3b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-606645 -n embed-certs-606645
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-606645 -n embed-certs-606645: exit status 2 (362.714198ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-606645 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-606645 logs -n 25: (1.329519041s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:14 UTC │ 08 Nov 25 10:14 UTC │
	│ image   │ old-k8s-version-332573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-332573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │                     │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-328489       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p cert-expiration-328489                                                                                                                                                                                                                     │ cert-expiration-328489       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │                     │
	│ stop    │ -p no-preload-872727 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ stop    │ -p embed-certs-606645 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:18 UTC │
	│ image   │ no-preload-872727 image list --format=json                                                                                                                                                                                                    │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p disable-driver-mounts-708013                                                                                                                                                                                                               │ disable-driver-mounts-708013 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ image   │ embed-certs-606645 image list --format=json                                                                                                                                                                                                   │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-606645 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:18:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:18:13.078273  491995 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:18:13.078454  491995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:13.078485  491995 out.go:374] Setting ErrFile to fd 2...
	I1108 10:18:13.078506  491995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:13.078780  491995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:18:13.079284  491995 out.go:368] Setting JSON to false
	I1108 10:18:13.081035  491995 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10842,"bootTime":1762586251,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:18:13.081153  491995 start.go:143] virtualization:  
	I1108 10:18:13.084897  491995 out.go:179] * [default-k8s-diff-port-689864] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:18:13.089014  491995 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:18:13.089112  491995 notify.go:221] Checking for updates...
	I1108 10:18:13.095016  491995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:18:13.098639  491995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:18:13.101686  491995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:18:13.104705  491995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:18:13.107544  491995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:18:13.110987  491995 config.go:182] Loaded profile config "embed-certs-606645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:13.111093  491995 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:18:13.149335  491995 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:18:13.149471  491995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:18:13.216593  491995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:18:13.207250123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:18:13.216702  491995 docker.go:319] overlay module found
	I1108 10:18:13.219743  491995 out.go:179] * Using the docker driver based on user configuration
	I1108 10:18:13.222592  491995 start.go:309] selected driver: docker
	I1108 10:18:13.222614  491995 start.go:930] validating driver "docker" against <nil>
	I1108 10:18:13.222628  491995 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:18:13.223426  491995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:18:13.279842  491995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:18:13.270163079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:18:13.279995  491995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:18:13.280234  491995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:18:13.283232  491995 out.go:179] * Using Docker driver with root privileges
	I1108 10:18:13.286162  491995 cni.go:84] Creating CNI manager for ""
	I1108 10:18:13.286235  491995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:18:13.286249  491995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:18:13.286332  491995 start.go:353] cluster config:
	{Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:18:13.291336  491995 out.go:179] * Starting "default-k8s-diff-port-689864" primary control-plane node in "default-k8s-diff-port-689864" cluster
	I1108 10:18:13.294175  491995 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:18:13.297089  491995 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:18:13.299918  491995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:18:13.299973  491995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:18:13.299986  491995 cache.go:59] Caching tarball of preloaded images
	I1108 10:18:13.299994  491995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:18:13.300078  491995 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:18:13.300089  491995 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:18:13.300192  491995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/config.json ...
	I1108 10:18:13.300212  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/config.json: {Name:mka75167d9d13eba3d9ad0cbdb5a023e5a95cceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:13.324163  491995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:18:13.324187  491995 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:18:13.324206  491995 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:18:13.324228  491995 start.go:360] acquireMachinesLock for default-k8s-diff-port-689864: {Name:mk8e02949baf85c4a0d930cca199e546b49684a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:18:13.324342  491995 start.go:364] duration metric: took 92.678µs to acquireMachinesLock for "default-k8s-diff-port-689864"
	I1108 10:18:13.324373  491995 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:18:13.324443  491995 start.go:125] createHost starting for "" (driver="docker")
	W1108 10:18:13.381198  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:15.879273  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	I1108 10:18:13.327728  491995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:18:13.327979  491995 start.go:159] libmachine.API.Create for "default-k8s-diff-port-689864" (driver="docker")
	I1108 10:18:13.328018  491995 client.go:173] LocalClient.Create starting
	I1108 10:18:13.328098  491995 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:18:13.328142  491995 main.go:143] libmachine: Decoding PEM data...
	I1108 10:18:13.328159  491995 main.go:143] libmachine: Parsing certificate...
	I1108 10:18:13.328215  491995 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:18:13.328249  491995 main.go:143] libmachine: Decoding PEM data...
	I1108 10:18:13.328263  491995 main.go:143] libmachine: Parsing certificate...
	I1108 10:18:13.328634  491995 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-689864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:18:13.351041  491995 cli_runner.go:211] docker network inspect default-k8s-diff-port-689864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:18:13.351136  491995 network_create.go:284] running [docker network inspect default-k8s-diff-port-689864] to gather additional debugging logs...
	I1108 10:18:13.351167  491995 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-689864
	W1108 10:18:13.368226  491995 cli_runner.go:211] docker network inspect default-k8s-diff-port-689864 returned with exit code 1
	I1108 10:18:13.368261  491995 network_create.go:287] error running [docker network inspect default-k8s-diff-port-689864]: docker network inspect default-k8s-diff-port-689864: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-689864 not found
	I1108 10:18:13.368277  491995 network_create.go:289] output of [docker network inspect default-k8s-diff-port-689864]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-689864 not found
	
	** /stderr **
	I1108 10:18:13.368398  491995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:18:13.388496  491995 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:18:13.388879  491995 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:18:13.389178  491995 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:18:13.389493  491995 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-805d16fd7168 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:6d:98:a6:aa:ba} reservation:<nil>}
	I1108 10:18:13.389916  491995 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a828f0}
	I1108 10:18:13.389942  491995 network_create.go:124] attempt to create docker network default-k8s-diff-port-689864 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 10:18:13.389997  491995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-689864 default-k8s-diff-port-689864
	I1108 10:18:13.455204  491995 network_create.go:108] docker network default-k8s-diff-port-689864 192.168.85.0/24 created
	I1108 10:18:13.455240  491995 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-689864" container
	I1108 10:18:13.455310  491995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:18:13.479412  491995 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-689864 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-689864 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:18:13.497491  491995 oci.go:103] Successfully created a docker volume default-k8s-diff-port-689864
	I1108 10:18:13.497588  491995 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-689864-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-689864 --entrypoint /usr/bin/test -v default-k8s-diff-port-689864:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:18:14.064283  491995 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-689864
	I1108 10:18:14.064344  491995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:18:14.064365  491995 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:18:14.064435  491995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-689864:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 10:18:17.879475  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:19.881652  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:22.382108  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	I1108 10:18:18.468059  491995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-689864:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.403566454s)
	I1108 10:18:18.468091  491995 kic.go:203] duration metric: took 4.40372241s to extract preloaded images to volume ...
	W1108 10:18:18.468239  491995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:18:18.468367  491995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:18:18.522177  491995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-689864 --name default-k8s-diff-port-689864 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-689864 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-689864 --network default-k8s-diff-port-689864 --ip 192.168.85.2 --volume default-k8s-diff-port-689864:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:18:18.856824  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Running}}
	I1108 10:18:18.880848  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:18.906824  491995 cli_runner.go:164] Run: docker exec default-k8s-diff-port-689864 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:18:18.959988  491995 oci.go:144] the created container "default-k8s-diff-port-689864" has a running status.
	I1108 10:18:18.960021  491995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa...
	I1108 10:18:19.459360  491995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:18:19.481333  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:19.504517  491995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:18:19.504536  491995 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-689864 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:18:19.559385  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:19.580152  491995 machine.go:94] provisionDockerMachine start ...
	I1108 10:18:19.580256  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:19.601589  491995 main.go:143] libmachine: Using SSH client type: native
	I1108 10:18:19.601908  491995 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1108 10:18:19.601918  491995 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:18:19.789112  491995 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-689864
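	The "native" SSH client referenced above talks to the container's published port 22, mapped here to 127.0.0.1:33448. A minimal Go sketch of running the same hostname command over SSH with golang.org/x/crypto/ssh; the port and key path are assumptions lifted from this log, not fixed values:

	    package main

	    import (
	        "fmt"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        // Key path and port are placeholders taken from the log above; adjust as needed.
	        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-689864/id_rsa"))
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test node
	        }
	        client, err := ssh.Dial("tcp", "127.0.0.1:33448", cfg)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()

	        session, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer session.Close()

	        out, err := session.Output("hostname")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("%s", out)
	    }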
	
	I1108 10:18:19.789135  491995 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-689864"
	I1108 10:18:19.789196  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:19.809214  491995 main.go:143] libmachine: Using SSH client type: native
	I1108 10:18:19.809535  491995 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1108 10:18:19.809548  491995 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-689864 && echo "default-k8s-diff-port-689864" | sudo tee /etc/hostname
	I1108 10:18:20.026798  491995 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-689864
	
	I1108 10:18:20.026966  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:20.053270  491995 main.go:143] libmachine: Using SSH client type: native
	I1108 10:18:20.053611  491995 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1108 10:18:20.053631  491995 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-689864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-689864/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-689864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:18:20.221594  491995 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:18:20.221623  491995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:18:20.221643  491995 ubuntu.go:190] setting up certificates
	I1108 10:18:20.221653  491995 provision.go:84] configureAuth start
	I1108 10:18:20.221711  491995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-689864
	I1108 10:18:20.241228  491995 provision.go:143] copyHostCerts
	I1108 10:18:20.241296  491995 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:18:20.241311  491995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:18:20.241392  491995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:18:20.241485  491995 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:18:20.241495  491995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:18:20.241522  491995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:18:20.241588  491995 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:18:20.241598  491995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:18:20.241627  491995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:18:20.241690  491995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-689864 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-689864 localhost minikube]
	I1108 10:18:20.728216  491995 provision.go:177] copyRemoteCerts
	I1108 10:18:20.728297  491995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:18:20.728342  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:20.746428  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:20.853107  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 10:18:20.882572  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:18:20.903183  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:18:20.922001  491995 provision.go:87] duration metric: took 700.325829ms to configureAuth
	I1108 10:18:20.922081  491995 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:18:20.922285  491995 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:20.922401  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:20.939815  491995 main.go:143] libmachine: Using SSH client type: native
	I1108 10:18:20.940137  491995 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1108 10:18:20.940159  491995 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:18:21.291165  491995 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:18:21.291190  491995 machine.go:97] duration metric: took 1.711018752s to provisionDockerMachine
	I1108 10:18:21.291200  491995 client.go:176] duration metric: took 7.963170255s to LocalClient.Create
	I1108 10:18:21.291214  491995 start.go:167] duration metric: took 7.963236051s to libmachine.API.Create "default-k8s-diff-port-689864"
	I1108 10:18:21.291222  491995 start.go:293] postStartSetup for "default-k8s-diff-port-689864" (driver="docker")
	I1108 10:18:21.291240  491995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:18:21.291305  491995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:18:21.291348  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:21.309720  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:21.412817  491995 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:18:21.416102  491995 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:18:21.416132  491995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:18:21.416144  491995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:18:21.416197  491995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:18:21.416287  491995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:18:21.416390  491995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:18:21.423691  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:18:21.442330  491995 start.go:296] duration metric: took 151.090203ms for postStartSetup
	I1108 10:18:21.442719  491995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-689864
	I1108 10:18:21.459627  491995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/config.json ...
	I1108 10:18:21.460006  491995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:18:21.460080  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:21.479312  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:21.581775  491995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:18:21.586221  491995 start.go:128] duration metric: took 8.261762779s to createHost
	I1108 10:18:21.586243  491995 start.go:83] releasing machines lock for "default-k8s-diff-port-689864", held for 8.261887212s
	I1108 10:18:21.586312  491995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-689864
	I1108 10:18:21.602200  491995 ssh_runner.go:195] Run: cat /version.json
	I1108 10:18:21.602236  491995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:18:21.602308  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:21.602333  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:21.626596  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:21.629601  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:21.728637  491995 ssh_runner.go:195] Run: systemctl --version
	I1108 10:18:21.820657  491995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:18:21.867007  491995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:18:21.872714  491995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:18:21.872831  491995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:18:21.902637  491995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:18:21.902713  491995 start.go:496] detecting cgroup driver to use...
	I1108 10:18:21.902762  491995 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:18:21.902826  491995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:18:21.920794  491995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:18:21.934100  491995 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:18:21.934216  491995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:18:21.951740  491995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:18:21.974156  491995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:18:22.109393  491995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:18:22.245897  491995 docker.go:234] disabling docker service ...
	I1108 10:18:22.245986  491995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:18:22.269759  491995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:18:22.283616  491995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:18:22.400637  491995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:18:22.526057  491995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:18:22.539835  491995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:18:22.553698  491995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:18:22.553813  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.563034  491995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:18:22.563136  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.571950  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.580743  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.590037  491995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:18:22.598269  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.606955  491995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.620240  491995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:18:22.629550  491995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:18:22.636868  491995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:18:22.644413  491995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:18:22.752331  491995 ssh_runner.go:195] Run: sudo systemctl restart crio
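	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A rough Go equivalent of the two central substitutions, illustrative only and requiring root to write the file:

	    package main

	    import (
	        "os"
	        "regexp"
	    )

	    func main() {
	        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	        data, err := os.ReadFile(conf)
	        if err != nil {
	            panic(err)
	        }
	        // Point cri-o at the desired pause image and cgroup driver, mirroring the sed edits above.
	        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	        if err := os.WriteFile(conf, data, 0o644); err != nil {
	            panic(err)
	        }
	        // The caller then restarts crio (systemctl restart crio), as in the log above.
	    }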
	I1108 10:18:22.896747  491995 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:18:22.896818  491995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:18:22.900701  491995 start.go:564] Will wait 60s for crictl version
	I1108 10:18:22.900770  491995 ssh_runner.go:195] Run: which crictl
	I1108 10:18:22.904413  491995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:18:22.933945  491995 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:18:22.934042  491995 ssh_runner.go:195] Run: crio --version
	I1108 10:18:22.963771  491995 ssh_runner.go:195] Run: crio --version
	I1108 10:18:22.999042  491995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:18:23.002930  491995 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-689864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:18:23.020485  491995 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:18:23.024979  491995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:18:23.034912  491995 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:18:23.035067  491995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:18:23.035129  491995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:18:23.074217  491995 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:18:23.074243  491995 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:18:23.074306  491995 ssh_runner.go:195] Run: sudo crictl images --output json
	W1108 10:18:24.879951  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:26.880424  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	I1108 10:18:23.107128  491995 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:18:23.107165  491995 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:18:23.107173  491995 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1108 10:18:23.107260  491995 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-689864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:18:23.107344  491995 ssh_runner.go:195] Run: crio config
	I1108 10:18:23.161882  491995 cni.go:84] Creating CNI manager for ""
	I1108 10:18:23.161905  491995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:18:23.161928  491995 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:18:23.161969  491995 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-689864 NodeName:default-k8s-diff-port-689864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:18:23.162146  491995 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-689864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
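	The generated kubeadm config above places pods in 10.244.0.0/16 and services in 10.96.0.0/12. A quick, illustrative Go check that the two CIDRs do not overlap (two CIDR blocks overlap exactly when one contains the other's network address):

	    package main

	    import (
	        "fmt"
	        "net"
	    )

	    func overlaps(a, b *net.IPNet) bool {
	        return a.Contains(b.IP) || b.Contains(a.IP)
	    }

	    func main() {
	        _, pods, _ := net.ParseCIDR("10.244.0.0/16")
	        _, services, _ := net.ParseCIDR("10.96.0.0/12")
	        fmt.Println("overlap:", overlaps(pods, services)) // overlap: false
	    }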
	
	I1108 10:18:23.162236  491995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:18:23.170052  491995 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:18:23.170148  491995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:18:23.177962  491995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 10:18:23.191488  491995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:18:23.204113  491995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1108 10:18:23.217853  491995 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:18:23.221612  491995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:18:23.231958  491995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:18:23.354239  491995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:18:23.371983  491995 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864 for IP: 192.168.85.2
	I1108 10:18:23.372050  491995 certs.go:195] generating shared ca certs ...
	I1108 10:18:23.372085  491995 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:23.372268  491995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:18:23.372363  491995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:18:23.372387  491995 certs.go:257] generating profile certs ...
	I1108 10:18:23.372481  491995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.key
	I1108 10:18:23.372531  491995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt with IP's: []
	I1108 10:18:24.313614  491995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt ...
	I1108 10:18:24.313646  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: {Name:mk584b6ad495780d334eef820aa9b9e2b0551705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:24.313850  491995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.key ...
	I1108 10:18:24.313867  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.key: {Name:mkcecb8a164bb53aa73fe055342a3d366b356d1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:24.313961  491995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4
	I1108 10:18:24.313979  491995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt.d58dafe4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
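	Among the SANs above, 10.96.0.1 is the first usable address of the service CIDR (10.96.0.0/12), i.e. the cluster IP of the built-in "kubernetes" Service, which is why it appears in the apiserver certificate. A short Go sketch of that derivation:

	    package main

	    import (
	        "fmt"
	        "net"
	    )

	    func main() {
	        _, svcNet, err := net.ParseCIDR("10.96.0.0/12")
	        if err != nil {
	            panic(err)
	        }
	        vip := append(net.IP(nil), svcNet.IP.To4()...) // copy the network address
	        vip[3]++                                       // network address + 1
	        fmt.Println(vip)                               // 10.96.0.1
	    }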
	I1108 10:18:25.323297  491995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt.d58dafe4 ...
	I1108 10:18:25.323329  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt.d58dafe4: {Name:mk2d84e0ad50444118242d9cba27a875ae719acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:25.323516  491995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4 ...
	I1108 10:18:25.323531  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4: {Name:mk98ff19cf78cb42b414fddad5ad6b814616f419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:25.323619  491995 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt.d58dafe4 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt
	I1108 10:18:25.323709  491995 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4 -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key
	I1108 10:18:25.323771  491995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key
	I1108 10:18:25.323789  491995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt with IP's: []
	I1108 10:18:26.355493  491995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt ...
	I1108 10:18:26.355524  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt: {Name:mk94af383d9b89198d16b000614c82a44c632ded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:26.355714  491995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key ...
	I1108 10:18:26.355729  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key: {Name:mk473629d65a136e1a5d7b6828688f90b070a87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:26.355924  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:18:26.355967  491995 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:18:26.355982  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:18:26.356006  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:18:26.356061  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:18:26.356089  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:18:26.356138  491995 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:18:26.356778  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:18:26.376632  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:18:26.398810  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:18:26.418970  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:18:26.437464  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 10:18:26.456773  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 10:18:26.479043  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:18:26.500772  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:18:26.522752  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:18:26.542282  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:18:26.560773  491995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:18:26.578902  491995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:18:26.591899  491995 ssh_runner.go:195] Run: openssl version
	I1108 10:18:26.598469  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:18:26.606896  491995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:18:26.610589  491995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:18:26.610659  491995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:18:26.651237  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:18:26.659450  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:18:26.668057  491995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:18:26.671775  491995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:18:26.671848  491995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:18:26.712632  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:18:26.721140  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:18:26.729327  491995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:18:26.733126  491995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:18:26.733190  491995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:18:26.776644  491995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:18:26.785131  491995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:18:26.788779  491995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:18:26.788849  491995 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:18:26.788968  491995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:18:26.789036  491995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:18:26.814833  491995 cri.go:89] found id: ""
	I1108 10:18:26.814933  491995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:18:26.822691  491995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:18:26.830540  491995 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:18:26.830639  491995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:18:26.839051  491995 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:18:26.839067  491995 kubeadm.go:158] found existing configuration files:
	
	I1108 10:18:26.839117  491995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1108 10:18:26.847080  491995 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:18:26.847151  491995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:18:26.854414  491995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1108 10:18:26.867972  491995 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:18:26.868078  491995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:18:26.875538  491995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1108 10:18:26.883935  491995 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:18:26.884022  491995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:18:26.891627  491995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1108 10:18:26.899579  491995 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:18:26.899655  491995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:18:26.906831  491995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:18:26.969419  491995 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:18:26.969710  491995 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:18:27.044325  491995 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1108 10:18:28.880675  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:30.896440  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	W1108 10:18:33.379783  488441 pod_ready.go:104] pod "coredns-66bc5c9577-t2frl" is not "Ready", error: <nil>
	I1108 10:18:34.879989  488441 pod_ready.go:94] pod "coredns-66bc5c9577-t2frl" is "Ready"
	I1108 10:18:34.880013  488441 pod_ready.go:86] duration metric: took 38.506205954s for pod "coredns-66bc5c9577-t2frl" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.883753  488441 pod_ready.go:83] waiting for pod "etcd-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.889849  488441 pod_ready.go:94] pod "etcd-embed-certs-606645" is "Ready"
	I1108 10:18:34.889931  488441 pod_ready.go:86] duration metric: took 6.15501ms for pod "etcd-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.892656  488441 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.898935  488441 pod_ready.go:94] pod "kube-apiserver-embed-certs-606645" is "Ready"
	I1108 10:18:34.899013  488441 pod_ready.go:86] duration metric: took 6.325818ms for pod "kube-apiserver-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:34.902366  488441 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:35.077484  488441 pod_ready.go:94] pod "kube-controller-manager-embed-certs-606645" is "Ready"
	I1108 10:18:35.077557  488441 pod_ready.go:86] duration metric: took 175.114847ms for pod "kube-controller-manager-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:35.278380  488441 pod_ready.go:83] waiting for pod "kube-proxy-tvxrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:35.677442  488441 pod_ready.go:94] pod "kube-proxy-tvxrb" is "Ready"
	I1108 10:18:35.677510  488441 pod_ready.go:86] duration metric: took 399.10109ms for pod "kube-proxy-tvxrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:35.879279  488441 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:36.277091  488441 pod_ready.go:94] pod "kube-scheduler-embed-certs-606645" is "Ready"
	I1108 10:18:36.277117  488441 pod_ready.go:86] duration metric: took 397.774163ms for pod "kube-scheduler-embed-certs-606645" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:18:36.277129  488441 pod_ready.go:40] duration metric: took 39.906951389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:18:36.375615  488441 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:18:36.379635  488441 out.go:179] * Done! kubectl is now configured to use "embed-certs-606645" cluster and "default" namespace by default
	I1108 10:18:43.851459  491995 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:18:43.851522  491995 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:18:43.851643  491995 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:18:43.851718  491995 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:18:43.851759  491995 kubeadm.go:319] OS: Linux
	I1108 10:18:43.851811  491995 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:18:43.851873  491995 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:18:43.851961  491995 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:18:43.852025  491995 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:18:43.852076  491995 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:18:43.852136  491995 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:18:43.852196  491995 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:18:43.852249  491995 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:18:43.852299  491995 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:18:43.852376  491995 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:18:43.852480  491995 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:18:43.852573  491995 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:18:43.852639  491995 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 10:18:43.855832  491995 out.go:252]   - Generating certificates and keys ...
	I1108 10:18:43.855928  491995 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:18:43.856002  491995 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:18:43.856091  491995 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:18:43.856158  491995 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:18:43.856238  491995 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:18:43.856301  491995 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:18:43.856361  491995 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:18:43.856507  491995 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-689864 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:18:43.856568  491995 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:18:43.856713  491995 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-689864 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:18:43.856787  491995 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:18:43.856862  491995 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:18:43.856943  491995 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:18:43.857023  491995 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:18:43.857083  491995 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:18:43.857153  491995 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:18:43.857225  491995 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:18:43.857304  491995 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:18:43.857372  491995 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:18:43.857501  491995 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:18:43.857588  491995 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:18:43.860523  491995 out.go:252]   - Booting up control plane ...
	I1108 10:18:43.860640  491995 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:18:43.860750  491995 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:18:43.860830  491995 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:18:43.860997  491995 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:18:43.861156  491995 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:18:43.861322  491995 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:18:43.861419  491995 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:18:43.861461  491995 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:18:43.861622  491995 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:18:43.861753  491995 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:18:43.861831  491995 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.49364ms
	I1108 10:18:43.861948  491995 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:18:43.862052  491995 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1108 10:18:43.862172  491995 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:18:43.862260  491995 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 10:18:43.862352  491995 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.779165803s
	I1108 10:18:43.862426  491995 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.098441104s
	I1108 10:18:43.862506  491995 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.00156548s
	I1108 10:18:43.862623  491995 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:18:43.862786  491995 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:18:43.862876  491995 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:18:43.863069  491995 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-689864 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:18:43.863128  491995 kubeadm.go:319] [bootstrap-token] Using token: abieie.xjlv7vvaabvnphsl
	I1108 10:18:43.866068  491995 out.go:252]   - Configuring RBAC rules ...
	I1108 10:18:43.866181  491995 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:18:43.866277  491995 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:18:43.866424  491995 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:18:43.866564  491995 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:18:43.866686  491995 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:18:43.866776  491995 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:18:43.866903  491995 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:18:43.866950  491995 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:18:43.866998  491995 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:18:43.867003  491995 kubeadm.go:319] 
	I1108 10:18:43.867066  491995 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:18:43.867070  491995 kubeadm.go:319] 
	I1108 10:18:43.867150  491995 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:18:43.867155  491995 kubeadm.go:319] 
	I1108 10:18:43.867181  491995 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:18:43.867242  491995 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:18:43.867296  491995 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:18:43.867300  491995 kubeadm.go:319] 
	I1108 10:18:43.867357  491995 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:18:43.867361  491995 kubeadm.go:319] 
	I1108 10:18:43.867411  491995 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:18:43.867415  491995 kubeadm.go:319] 
	I1108 10:18:43.867470  491995 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:18:43.867549  491995 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:18:43.867634  491995 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:18:43.867639  491995 kubeadm.go:319] 
	I1108 10:18:43.867728  491995 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:18:43.867808  491995 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:18:43.867812  491995 kubeadm.go:319] 
	I1108 10:18:43.867901  491995 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token abieie.xjlv7vvaabvnphsl \
	I1108 10:18:43.868009  491995 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 10:18:43.868030  491995 kubeadm.go:319] 	--control-plane 
	I1108 10:18:43.868034  491995 kubeadm.go:319] 
	I1108 10:18:43.868124  491995 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:18:43.868128  491995 kubeadm.go:319] 
	I1108 10:18:43.868215  491995 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token abieie.xjlv7vvaabvnphsl \
	I1108 10:18:43.868342  491995 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
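	
	If the join output is lost, the discovery hash above can be recomputed on the control plane from the cluster CA; a sketch assuming the usual kubeadm openssl pipeline and the certificateDir /var/lib/minikube/certs used by this run:
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	  kubeadm token list        # lists abieie.xjlv7vvaabvnphsl while the token is still valid
	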
	I1108 10:18:43.868352  491995 cni.go:84] Creating CNI manager for ""
	I1108 10:18:43.868359  491995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:18:43.873227  491995 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:18:43.876155  491995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:18:43.880490  491995 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:18:43.880512  491995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:18:43.894549  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:18:44.213108  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:44.213214  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-689864 minikube.k8s.io/updated_at=2025_11_08T10_18_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=default-k8s-diff-port-689864 minikube.k8s.io/primary=true
	I1108 10:18:44.213250  491995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:18:44.461378  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:44.461461  491995 ops.go:34] apiserver oom_adj: -16
	I1108 10:18:44.961801  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:45.461494  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:45.961765  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:46.462281  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:46.961587  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:47.462127  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:47.962132  491995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:18:48.094782  491995 kubeadm.go:1114] duration metric: took 3.881728738s to wait for elevateKubeSystemPrivileges
	I1108 10:18:48.094812  491995 kubeadm.go:403] duration metric: took 21.305967559s to StartCluster
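	
	The repeated "get sa default" runs above are a poll: kubeadm's post-init controllers have to create the "default" ServiceAccount before kube-system privileges can be elevated. A minimal sketch of the same wait, using the binary path and kubeconfig from the log:
	
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done
	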
	I1108 10:18:48.094830  491995 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:48.094891  491995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:18:48.101981  491995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:48.102323  491995 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:18:48.102730  491995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:18:48.103032  491995 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:48.103072  491995 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:18:48.103215  491995 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-689864"
	I1108 10:18:48.103249  491995 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-689864"
	I1108 10:18:48.103275  491995 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:18:48.103745  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:48.104701  491995 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-689864"
	I1108 10:18:48.104730  491995 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-689864"
	I1108 10:18:48.105210  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:48.105383  491995 out.go:179] * Verifying Kubernetes components...
	I1108 10:18:48.107590  491995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:18:48.155221  491995 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:18:48.155418  491995 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-689864"
	I1108 10:18:48.155452  491995 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:18:48.155884  491995 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:18:48.162161  491995 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:18:48.162183  491995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:18:48.162254  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:48.194120  491995 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:18:48.194142  491995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:18:48.194206  491995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:18:48.208318  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:48.232387  491995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:18:48.628059  491995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:18:48.724782  491995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:18:48.724954  491995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:18:48.771786  491995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:18:49.439442  491995 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-689864" to be "Ready" ...
	I1108 10:18:49.439670  491995 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1108 10:18:49.860009  491995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.088126006s)
	I1108 10:18:49.863276  491995 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1108 10:18:49.866311  491995 addons.go:515] duration metric: took 1.763212172s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1108 10:18:49.944384  491995 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-689864" context rescaled to 1 replicas
	W1108 10:18:51.443202  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
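	
	The sed pipeline run at 10:18:48 rewrites the coredns ConfigMap so the Corefile gains a "log" directive and a hosts block (192.168.85.1 host.minikube.internal, with fallthrough) ahead of the forward stanza, which is what the "host record injected" line reports. A quick way to view the result, assuming the standard kubeadm ConfigMap name:
	
	  kubectl --context default-k8s-diff-port-689864 -n kube-system \
	    get configmap coredns -o jsonpath='{.data.Corefile}'
	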
	
	
	==> CRI-O <==
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.870878826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.88878531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.897466734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.9273217Z" level=info msg="Created container 71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q/dashboard-metrics-scraper" id=ecde474d-c635-4031-ac14-e407d3d79206 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.933310989Z" level=info msg="Starting container: 71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013" id=a933eaec-f6d5-415c-aed6-88b49e29e602 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:18:30 embed-certs-606645 crio[650]: time="2025-11-08T10:18:30.93960159Z" level=info msg="Started container" PID=1636 containerID=71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q/dashboard-metrics-scraper id=a933eaec-f6d5-415c-aed6-88b49e29e602 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb9583b4183e67dd0ebf4a8684fbc41f9148bda7f962e6305b00d5d369e3a5a9
	Nov 08 10:18:30 embed-certs-606645 conmon[1634]: conmon 71f497b81ff8118cbaa1 <ninfo>: container 1636 exited with status 1
	Nov 08 10:18:31 embed-certs-606645 crio[650]: time="2025-11-08T10:18:31.093168274Z" level=info msg="Removing container: 0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74" id=f6bdbd6b-227c-41a4-a5b8-fe32f2dbf534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:18:31 embed-certs-606645 crio[650]: time="2025-11-08T10:18:31.108713744Z" level=info msg="Error loading conmon cgroup of container 0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74: cgroup deleted" id=f6bdbd6b-227c-41a4-a5b8-fe32f2dbf534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:18:31 embed-certs-606645 crio[650]: time="2025-11-08T10:18:31.11915941Z" level=info msg="Removed container 0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q/dashboard-metrics-scraper" id=f6bdbd6b-227c-41a4-a5b8-fe32f2dbf534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.026230059Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.031291685Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.031332448Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.031350319Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.03463679Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.034872763Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.034956604Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.045881528Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.046042859Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.046188239Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.049969008Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.05011273Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.050190105Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.055411806Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:18:37 embed-certs-606645 crio[650]: time="2025-11-08T10:18:37.055550055Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	71f497b81ff81       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   bb9583b4183e6       dashboard-metrics-scraper-6ffb444bf9-qxk4q   kubernetes-dashboard
	c20664f059121       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   7e2b7eb23d85f       storage-provisioner                          kube-system
	01bb45ef7ee36       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   ae02fc92dee49       kubernetes-dashboard-855c9754f9-chddn        kubernetes-dashboard
	042edc78a1ba0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   969929b521f3f       coredns-66bc5c9577-t2frl                     kube-system
	6810790b010f4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   01da0522f2b41       busybox                                      default
	442e9f54ea9d7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   e68b4f36094f1       kube-proxy-tvxrb                             kube-system
	e0039c06b9d3e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   7e2b7eb23d85f       storage-provisioner                          kube-system
	391f1b0171025       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   14d0d300a14b6       kindnet-tb5h7                                kube-system
	5006a51562d78       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3b1188a724fb7       kube-controller-manager-embed-certs-606645   kube-system
	2298a2b9d3c1e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c50697c11e30b       kube-scheduler-embed-certs-606645            kube-system
	08e54b1979953       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9eea23c3396e0       kube-apiserver-embed-certs-606645            kube-system
	c8f3c7bba121f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   b377b700040f4       etcd-embed-certs-606645                      kube-system
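	
	The dashboard-metrics-scraper container listed as Exited (attempt 2) is the one CRI-O reported exiting with status 1 above; a sketch of pulling its logs straight from the runtime inside the node container (container ID prefix taken from the table, crictl assumed to be present in the node image):
	
	  docker exec embed-certs-606645 crictl ps -a --name dashboard-metrics-scraper
	  docker exec embed-certs-606645 crictl logs 71f497b81ff81
	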
	
	
	==> coredns [042edc78a1ba06a2c62964d4f6528d68d583b2b7f403c53d71c4fee80cc08052] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44762 - 17640 "HINFO IN 2755601552339447740.1591408011964283048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017086707s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
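	
	The i/o timeouts above are CoreDNS failing to reach the in-cluster API service VIP (10.96.0.1:443) right after the restart, most likely while kube-proxy and kindnet were still reprogramming the node; they look transient rather than a steady-state failure. A quick check that the VIP path is healthy again, run from the build host against the same context:
	
	  kubectl --context embed-certs-606645 get --raw /readyz
	  kubectl --context embed-certs-606645 get endpoints kubernetes
	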
	
	
	==> describe nodes <==
	Name:               embed-certs-606645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-606645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=embed-certs-606645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_16_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:16:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-606645
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:18:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:18:35 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:18:35 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:18:35 +0000   Sat, 08 Nov 2025 10:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:18:35 +0000   Sat, 08 Nov 2025 10:17:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-606645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                64b557bb-52b3-4c19-9c89-a18ac4cd988b
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-t2frl                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-606645                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-tb5h7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-embed-certs-606645             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-embed-certs-606645    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-tvxrb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-606645             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qxk4q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-chddn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node embed-certs-606645 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node embed-certs-606645 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node embed-certs-606645 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s                  node-controller  Node embed-certs-606645 event: Registered Node embed-certs-606645 in Controller
	  Normal   NodeReady                100s                   kubelet          Node embed-certs-606645 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node embed-certs-606645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node embed-certs-606645 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node embed-certs-606645 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-606645 event: Registered Node embed-certs-606645 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c8f3c7bba121ffe6cb77869768c5c4e9a6be9275e646d287e9e1c92fbad9874a] <==
	{"level":"warn","ts":"2025-11-08T10:17:53.310611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.333994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.373296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.376770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.408081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.421476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.441662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.484270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.489351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.505809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.543361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.549089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.572776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.584421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.602276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.620989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.638730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.671077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.689699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.721613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.763625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.788293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.811607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.828438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:17:53.894577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51582","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:18:54 up  3:01,  0 user,  load average: 3.52, 3.78, 2.87
	Linux embed-certs-606645 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [391f1b0171025f124806525a7d27c429a750dc65ffcdaded79fc59e096061d09] <==
	I1108 10:17:56.818930       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:17:56.819248       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:17:56.819433       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:17:56.819447       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:17:56.819463       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:17:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:17:57.025614       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:17:57.025641       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:17:57.025650       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:17:57.026302       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:18:27.025862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:18:27.025933       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:18:27.026092       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:18:27.027188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:18:28.625765       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:18:28.625804       1 metrics.go:72] Registering metrics
	I1108 10:18:28.625873       1 controller.go:711] "Syncing nftables rules"
	I1108 10:18:37.025836       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:18:37.025956       1 main.go:301] handling current node
	I1108 10:18:47.032442       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:18:47.032475       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08e54b19799530c7e6d595805299e60c4e547af1744c8361f316134cbbe2a926] <==
	I1108 10:17:54.753036       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:17:54.763882       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:17:54.807315       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:17:54.820769       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:17:54.845962       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:17:54.845994       1 policy_source.go:240] refreshing policies
	I1108 10:17:54.846204       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:17:54.846217       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:17:54.846302       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:17:54.846790       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:17:54.854209       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 10:17:54.854305       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:17:54.875687       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:17:54.886056       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:17:55.386575       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:17:55.457304       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:17:55.488564       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:17:55.501041       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:17:55.515576       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:17:55.561478       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:17:55.605582       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.75.35"}
	I1108 10:17:55.647125       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.138.71"}
	I1108 10:17:58.367375       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:17:58.615029       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:17:58.713934       1 controller.go:667] quota admission added evaluator for: endpoints
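	
	The two "allocated clusterIPs" lines correspond to the kubernetes-dashboard Services recreated after the restart; their allocations can be confirmed with:
	
	  kubectl --context embed-certs-606645 -n kubernetes-dashboard get svc -o wide
	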
	
	
	==> kube-controller-manager [5006a51562d78f245738f517395a022d4af9f17acc61f08716bf0611005b63d5] <==
	I1108 10:17:58.197497       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:17:58.202646       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:17:58.203892       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:17:58.203936       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:17:58.204044       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:17:58.204119       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-606645"
	I1108 10:17:58.204169       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:17:58.207508       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:17:58.207723       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:17:58.208077       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:17:58.208194       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:17:58.209632       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:17:58.209691       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:17:58.210385       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:17:58.210487       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:17:58.210652       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:17:58.214164       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:17:58.217437       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:17:58.222725       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:17:58.222741       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 10:17:58.225570       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:17:58.233410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:17:58.233517       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:17:58.233576       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:17:58.236948       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [442e9f54ea9d773c0c532faed6983236644cf9b2a7b140f49dbc444185e223a1] <==
	I1108 10:17:56.869421       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:17:56.969599       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:17:57.071043       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:17:57.071078       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:17:57.071145       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:17:57.128358       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:17:57.128412       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:17:57.132703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:17:57.133030       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:17:57.133048       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:17:57.134306       1 config.go:200] "Starting service config controller"
	I1108 10:17:57.134327       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:17:57.134344       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:17:57.134348       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:17:57.134358       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:17:57.134362       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:17:57.135001       1 config.go:309] "Starting node config controller"
	I1108 10:17:57.135019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:17:57.135027       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:17:57.234450       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:17:57.234494       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:17:57.234523       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
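	
	The "nodePortAddresses is unset" warning just means kube-proxy will accept NodePort traffic on every local IP. In a kubeadm-managed cluster that setting lives in the kube-proxy ConfigMap; a sketch of inspecting (and, if desired, narrowing) it, assuming the standard ConfigMap name:
	
	  kubectl --context embed-certs-606645 -n kube-system get configmap kube-proxy -o yaml \
	    | grep -n nodePortAddresses        # no output means it is unset
	  # to narrow it, set nodePortAddresses: ["primary"] in the ConfigMap and restart the kube-proxy pods
	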
	
	
	==> kube-scheduler [2298a2b9d3c1ef0e53019c3afbbfde3d06e6f3fe8557c487e2ea43cb7b855e00] <==
	I1108 10:17:54.722494       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:17:54.744766       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:17:54.745010       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:54.745036       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:17:54.745060       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 10:17:54.774768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:17:54.774984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:17:54.775033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:17:54.775068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:17:54.775368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:17:54.775420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:17:54.775470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:17:54.775541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:17:54.775593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:17:54.775709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:17:54.775753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:17:54.778455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:17:54.785414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:17:54.785506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:17:54.785568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:17:54.785666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:17:54.785718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:17:54.785761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:17:54.809367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:17:56.345390       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:17:58 embed-certs-606645 kubelet[774]: I1108 10:17:58.929264     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbl7\" (UniqueName: \"kubernetes.io/projected/aae13813-227e-4300-9a66-f13600fe1537-kube-api-access-qlbl7\") pod \"kubernetes-dashboard-855c9754f9-chddn\" (UID: \"aae13813-227e-4300-9a66-f13600fe1537\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-chddn"
	Nov 08 10:17:58 embed-certs-606645 kubelet[774]: I1108 10:17:58.929287     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/05d6a2a5-b1ee-4b71-8c85-948aad881f39-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qxk4q\" (UID: \"05d6a2a5-b1ee-4b71-8c85-948aad881f39\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q"
	Nov 08 10:17:58 embed-certs-606645 kubelet[774]: I1108 10:17:58.929314     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s9mc\" (UniqueName: \"kubernetes.io/projected/05d6a2a5-b1ee-4b71-8c85-948aad881f39-kube-api-access-5s9mc\") pod \"dashboard-metrics-scraper-6ffb444bf9-qxk4q\" (UID: \"05d6a2a5-b1ee-4b71-8c85-948aad881f39\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q"
	Nov 08 10:17:59 embed-certs-606645 kubelet[774]: W1108 10:17:59.144524     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/crio-ae02fc92dee4944ec8227daa37cb4da3a578e559f2b4ec7e525002142dc300c5 WatchSource:0}: Error finding container ae02fc92dee4944ec8227daa37cb4da3a578e559f2b4ec7e525002142dc300c5: Status 404 returned error can't find the container with id ae02fc92dee4944ec8227daa37cb4da3a578e559f2b4ec7e525002142dc300c5
	Nov 08 10:17:59 embed-certs-606645 kubelet[774]: W1108 10:17:59.166934     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d42979033f3b0487eecddc880f005c016a6a03d10f42eb542658c778d4821431/crio-bb9583b4183e67dd0ebf4a8684fbc41f9148bda7f962e6305b00d5d369e3a5a9 WatchSource:0}: Error finding container bb9583b4183e67dd0ebf4a8684fbc41f9148bda7f962e6305b00d5d369e3a5a9: Status 404 returned error can't find the container with id bb9583b4183e67dd0ebf4a8684fbc41f9148bda7f962e6305b00d5d369e3a5a9
	Nov 08 10:18:04 embed-certs-606645 kubelet[774]: I1108 10:18:04.571472     774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 10:18:08 embed-certs-606645 kubelet[774]: I1108 10:18:08.894107     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-chddn" podStartSLOduration=3.982725595 podStartE2EDuration="10.894087004s" podCreationTimestamp="2025-11-08 10:17:58 +0000 UTC" firstStartedPulling="2025-11-08 10:17:59.149742581 +0000 UTC m=+9.604112721" lastFinishedPulling="2025-11-08 10:18:06.0611039 +0000 UTC m=+16.515474130" observedRunningTime="2025-11-08 10:18:07.014985188 +0000 UTC m=+17.469355353" watchObservedRunningTime="2025-11-08 10:18:08.894087004 +0000 UTC m=+19.348457136"
	Nov 08 10:18:12 embed-certs-606645 kubelet[774]: I1108 10:18:12.016402     774 scope.go:117] "RemoveContainer" containerID="81383ae86c354f0cb3745af745bc02446fbda28cded31f305950f5ebd9cfe7cb"
	Nov 08 10:18:13 embed-certs-606645 kubelet[774]: I1108 10:18:13.022775     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:13 embed-certs-606645 kubelet[774]: I1108 10:18:13.023358     774 scope.go:117] "RemoveContainer" containerID="81383ae86c354f0cb3745af745bc02446fbda28cded31f305950f5ebd9cfe7cb"
	Nov 08 10:18:13 embed-certs-606645 kubelet[774]: E1108 10:18:13.031008     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:14 embed-certs-606645 kubelet[774]: I1108 10:18:14.027050     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:14 embed-certs-606645 kubelet[774]: E1108 10:18:14.027221     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:18 embed-certs-606645 kubelet[774]: I1108 10:18:18.393692     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:18 embed-certs-606645 kubelet[774]: E1108 10:18:18.393895     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:27 embed-certs-606645 kubelet[774]: I1108 10:18:27.072140     774 scope.go:117] "RemoveContainer" containerID="e0039c06b9d3ed0c15a3cbdb6881dcdc6c82aaadaa34cddb7cdae0c77d071028"
	Nov 08 10:18:30 embed-certs-606645 kubelet[774]: I1108 10:18:30.866859     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:31 embed-certs-606645 kubelet[774]: I1108 10:18:31.087634     774 scope.go:117] "RemoveContainer" containerID="0b68d053cfb87937c533ae0e3971466da552057e3d8b3bbf56eec5e7a8811b74"
	Nov 08 10:18:31 embed-certs-606645 kubelet[774]: I1108 10:18:31.088241     774 scope.go:117] "RemoveContainer" containerID="71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013"
	Nov 08 10:18:31 embed-certs-606645 kubelet[774]: E1108 10:18:31.089971     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:38 embed-certs-606645 kubelet[774]: I1108 10:18:38.393936     774 scope.go:117] "RemoveContainer" containerID="71f497b81ff8118cbaa183e3e21654e75ff7d6dd981234353184224f1624e013"
	Nov 08 10:18:38 embed-certs-606645 kubelet[774]: E1108 10:18:38.394600     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qxk4q_kubernetes-dashboard(05d6a2a5-b1ee-4b71-8c85-948aad881f39)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qxk4q" podUID="05d6a2a5-b1ee-4b71-8c85-948aad881f39"
	Nov 08 10:18:48 embed-certs-606645 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:18:49 embed-certs-606645 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:18:49 embed-certs-606645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [01bb45ef7ee363146702b3f88b829650308e0f0fbd58f03c70b08a1236b6a4e3] <==
	2025/11/08 10:18:06 Using namespace: kubernetes-dashboard
	2025/11/08 10:18:06 Using in-cluster config to connect to apiserver
	2025/11/08 10:18:06 Using secret token for csrf signing
	2025/11/08 10:18:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:18:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:18:06 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:18:06 Generating JWE encryption key
	2025/11/08 10:18:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:18:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:18:06 Initializing JWE encryption key from synchronized object
	2025/11/08 10:18:06 Creating in-cluster Sidecar client
	2025/11/08 10:18:06 Serving insecurely on HTTP port: 9090
	2025/11/08 10:18:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:18:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:18:06 Starting overwatch
	
	
	==> storage-provisioner [c20664f059121df02ea619e301151ea7513e1dd1eb20d1419ce1a514d6ca58da] <==
	I1108 10:18:27.150125       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:18:27.175170       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:18:27.176463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:18:27.180707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:30.635970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:34.910742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:38.509471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:41.562667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:44.585026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:44.590782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:18:44.590916       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:18:44.591074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-606645_b281edc5-a6f4-4092-a1c3-8614294ee2b1!
	I1108 10:18:44.592597       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87e0990b-37ee-4c3a-94da-724d0f4a2331", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-606645_b281edc5-a6f4-4092-a1c3-8614294ee2b1 became leader
	W1108 10:18:44.599321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:44.603892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:18:44.691920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-606645_b281edc5-a6f4-4092-a1c3-8614294ee2b1!
	W1108 10:18:46.607129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:46.612137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:48.615949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:48.628289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:50.631790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:50.641669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:52.644778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:18:52.650428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e0039c06b9d3ed0c15a3cbdb6881dcdc6c82aaadaa34cddb7cdae0c77d071028] <==
	I1108 10:17:56.737791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:18:26.740002       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
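For reference, the storage-provisioner instance that exits fatally in the logs above ("error getting server version: Get \"https://10.96.0.1:443/version?timeout=32s\" ... i/o timeout") is failing a plain version probe against the apiserver's default service IP. A minimal in-cluster sketch of an equivalent call, assuming standard client-go and a pod that can reach the apiserver service (illustrative only, not the provisioner's actual code):

	// Sketch: issue the same kind of version probe that the failing
	// storage-provisioner above times out on (a GET against
	// https://10.96.0.1:443/version). Must run inside a pod so that
	// rest.InClusterConfig() and the service IP are usable.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			// In the run above, this is where the i/o timeout surfaces.
			log.Fatalf("server version: %v", err)
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}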
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-606645 -n embed-certs-606645
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-606645 -n embed-certs-606645: exit status 2 (382.213098ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-606645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.74s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (292.550868ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:19:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
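The MK_ADDON_ENABLE_PAUSED failure above is minikube's pre-flight "is the runtime paused?" check: per the stderr, it runs "sudo runc list -f json" inside the node, and on this run that command exits 1 with "open /run/runc: no such file or directory". A minimal sketch of re-running that probe by hand, assuming only that the kic node container is named after the profile (the docker inspect output below shows "/newest-cni-330758") and making no claim that this mirrors minikube's actual implementation:

	// Sketch: exec the same "sudo runc list -f json" probe that the paused
	// check reports as failing, directly against the node container.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "exec", "newest-cni-330758",
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc list output: %s\n", out)
		if err != nil {
			// On this run the probe fails with
			// "open /run/runc: no such file or directory".
			fmt.Println("paused check failed:", err)
		}
	}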
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-330758
helpers_test.go:243: (dbg) docker inspect newest-cni-330758:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55",
	        "Created": "2025-11-08T10:19:03.891974469Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:19:03.96110692Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/hosts",
	        "LogPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55-json.log",
	        "Name": "/newest-cni-330758",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-330758:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-330758",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55",
	                "LowerDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-330758",
	                "Source": "/var/lib/docker/volumes/newest-cni-330758/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-330758",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-330758",
	                "name.minikube.sigs.k8s.io": "newest-cni-330758",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8be80288cb72f4dc7bacae21734bc6ee3bd4b20024e7dbac7c582b33ebbab2e2",
	            "SandboxKey": "/var/run/docker/netns/8be80288cb72",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-330758": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:64:78:38:cc:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "944292dd69993087fe1b211f0e5fa77d84eca9279fd41eb0187ac090cde431bf",
	                    "EndpointID": "8ec2191f369bf38244d343fba11d5727fb5cd96d7bb4d2d9e3397b58713382e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-330758",
	                        "7ffe9198584b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-330758 -n newest-cni-330758
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-330758 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-330758 logs -n 25: (1.21047557s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-332573                                                                                                                                                                                                                     │ old-k8s-version-332573       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-328489       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ delete  │ -p cert-expiration-328489                                                                                                                                                                                                                     │ cert-expiration-328489       │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:15 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │                     │
	│ stop    │ -p no-preload-872727 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ stop    │ -p embed-certs-606645 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:18 UTC │
	│ image   │ no-preload-872727 image list --format=json                                                                                                                                                                                                    │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p disable-driver-mounts-708013                                                                                                                                                                                                               │ disable-driver-mounts-708013 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ embed-certs-606645 image list --format=json                                                                                                                                                                                                   │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-606645 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:18:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:18:58.056021  495950 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:18:58.056210  495950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:58.056240  495950 out.go:374] Setting ErrFile to fd 2...
	I1108 10:18:58.056264  495950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:18:58.056546  495950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:18:58.057044  495950 out.go:368] Setting JSON to false
	I1108 10:18:58.058008  495950 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10887,"bootTime":1762586251,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:18:58.058102  495950 start.go:143] virtualization:  
	I1108 10:18:58.062517  495950 out.go:179] * [newest-cni-330758] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:18:58.067077  495950 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:18:58.067225  495950 notify.go:221] Checking for updates...
	I1108 10:18:58.073995  495950 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:18:58.077317  495950 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	W1108 10:18:53.448869  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	W1108 10:18:55.944208  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	I1108 10:18:58.082500  495950 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:18:58.085714  495950 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:18:58.088864  495950 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:18:58.092494  495950 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:18:58.092705  495950 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:18:58.118339  495950 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:18:58.118467  495950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:18:58.186338  495950 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:18:58.176741307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:18:58.186444  495950 docker.go:319] overlay module found
	I1108 10:18:58.191574  495950 out.go:179] * Using the docker driver based on user configuration
	I1108 10:18:58.194712  495950 start.go:309] selected driver: docker
	I1108 10:18:58.194736  495950 start.go:930] validating driver "docker" against <nil>
	I1108 10:18:58.194750  495950 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:18:58.195484  495950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:18:58.256798  495950 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:18:58.240719623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:18:58.256990  495950 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 10:18:58.257023  495950 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 10:18:58.257318  495950 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:18:58.262118  495950 out.go:179] * Using Docker driver with root privileges
	I1108 10:18:58.265297  495950 cni.go:84] Creating CNI manager for ""
	I1108 10:18:58.265373  495950 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:18:58.265387  495950 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:18:58.265469  495950 start.go:353] cluster config:
	{Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:18:58.268502  495950 out.go:179] * Starting "newest-cni-330758" primary control-plane node in "newest-cni-330758" cluster
	I1108 10:18:58.271432  495950 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:18:58.274634  495950 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:18:58.277563  495950 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:18:58.277655  495950 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:18:58.277662  495950 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:18:58.277724  495950 cache.go:59] Caching tarball of preloaded images
	I1108 10:18:58.277913  495950 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:18:58.277941  495950 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:18:58.278092  495950 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/config.json ...
	I1108 10:18:58.278133  495950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/config.json: {Name:mkd27e35a7fe35b67d27f3f726d337128d84afee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:18:58.297959  495950 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:18:58.297985  495950 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:18:58.298004  495950 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:18:58.298028  495950 start.go:360] acquireMachinesLock for newest-cni-330758: {Name:mka68247f3ee22af15ad7dc6cf73067d1036d0ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:18:58.298138  495950 start.go:364] duration metric: took 92.792µs to acquireMachinesLock for "newest-cni-330758"
	I1108 10:18:58.298175  495950 start.go:93] Provisioning new machine with config: &{Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:18:58.298269  495950 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:18:58.301712  495950 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:18:58.301954  495950 start.go:159] libmachine.API.Create for "newest-cni-330758" (driver="docker")
	I1108 10:18:58.302001  495950 client.go:173] LocalClient.Create starting
	I1108 10:18:58.302086  495950 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:18:58.302122  495950 main.go:143] libmachine: Decoding PEM data...
	I1108 10:18:58.302142  495950 main.go:143] libmachine: Parsing certificate...
	I1108 10:18:58.302200  495950 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:18:58.302222  495950 main.go:143] libmachine: Decoding PEM data...
	I1108 10:18:58.302236  495950 main.go:143] libmachine: Parsing certificate...
	I1108 10:18:58.302601  495950 cli_runner.go:164] Run: docker network inspect newest-cni-330758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:18:58.319495  495950 cli_runner.go:211] docker network inspect newest-cni-330758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:18:58.319586  495950 network_create.go:284] running [docker network inspect newest-cni-330758] to gather additional debugging logs...
	I1108 10:18:58.319608  495950 cli_runner.go:164] Run: docker network inspect newest-cni-330758
	W1108 10:18:58.334867  495950 cli_runner.go:211] docker network inspect newest-cni-330758 returned with exit code 1
	I1108 10:18:58.334897  495950 network_create.go:287] error running [docker network inspect newest-cni-330758]: docker network inspect newest-cni-330758: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-330758 not found
	I1108 10:18:58.334925  495950 network_create.go:289] output of [docker network inspect newest-cni-330758]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-330758 not found
	
	** /stderr **
	I1108 10:18:58.335028  495950 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:18:58.353875  495950 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:18:58.354254  495950 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:18:58.354495  495950 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:18:58.354945  495950 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196b980}
	I1108 10:18:58.354970  495950 network_create.go:124] attempt to create docker network newest-cni-330758 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:18:58.355025  495950 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-330758 newest-cni-330758
	I1108 10:18:58.414029  495950 network_create.go:108] docker network newest-cni-330758 192.168.76.0/24 created
	I1108 10:18:58.414061  495950 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-330758" container
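For reference, the bridge network minikube just created can be checked by hand with the same Go-template approach used in the inspect calls above; a minimal sketch, assuming the Docker CLI is available on the host:

    docker network inspect newest-cni-330758 \
        --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected for this run: 192.168.76.0/24 192.168.76.1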
	I1108 10:18:58.414150  495950 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:18:58.430547  495950 cli_runner.go:164] Run: docker volume create newest-cni-330758 --label name.minikube.sigs.k8s.io=newest-cni-330758 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:18:58.453954  495950 oci.go:103] Successfully created a docker volume newest-cni-330758
	I1108 10:18:58.454055  495950 cli_runner.go:164] Run: docker run --rm --name newest-cni-330758-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-330758 --entrypoint /usr/bin/test -v newest-cni-330758:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:18:59.044510  495950 oci.go:107] Successfully prepared a docker volume newest-cni-330758
	I1108 10:18:59.044559  495950 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:18:59.044579  495950 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:18:59.044661  495950 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-330758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 10:18:58.443326  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	W1108 10:19:00.445543  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	W1108 10:19:02.942422  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	I1108 10:19:03.812281  495950 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-330758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.767579265s)
	I1108 10:19:03.812316  495950 kic.go:203] duration metric: took 4.767732661s to extract preloaded images to volume ...
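The extraction above only populates the newest-cni-330758 volume; if its contents ever need to be inspected, a throwaway container can list them. A sketch, assuming the same kicbase image is still in the local daemon and that the cri-o preload unpacks an image store under lib/containers (not verified against this run):

    docker run --rm --entrypoint /usr/bin/ls \
        -v newest-cni-330758:/var \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 \
        /var/lib/containers/storage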
	W1108 10:19:03.812483  495950 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:19:03.812603  495950 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:19:03.876199  495950 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-330758 --name newest-cni-330758 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-330758 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-330758 --network newest-cni-330758 --ip 192.168.76.2 --volume newest-cni-330758:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:19:04.193350  495950 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Running}}
	I1108 10:19:04.213295  495950 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:04.238932  495950 cli_runner.go:164] Run: docker exec newest-cni-330758 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:19:04.294054  495950 oci.go:144] the created container "newest-cni-330758" has a running status.
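The host-side ports behind the --publish=127.0.0.1:: mappings in the docker run above are assigned dynamically; they can be read back once the container reports a running status. A sketch, using the container name from this run:

    docker port newest-cni-330758 22/tcp     # SSH port minikube dials (127.0.0.1:33453 in this log)
    docker port newest-cni-330758 8443/tcp   # forwarded Kubernetes API server port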
	I1108 10:19:04.294081  495950 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa...
	I1108 10:19:05.335072  495950 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:19:05.356526  495950 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:05.383494  495950 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:19:05.383528  495950 kic_runner.go:114] Args: [docker exec --privileged newest-cni-330758 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:19:05.425438  495950 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:05.444823  495950 machine.go:94] provisionDockerMachine start ...
	I1108 10:19:05.444929  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:05.465891  495950 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:05.466221  495950 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1108 10:19:05.466231  495950 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:19:05.620984  495950 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-330758
	
	I1108 10:19:05.621009  495950 ubuntu.go:182] provisioning hostname "newest-cni-330758"
	I1108 10:19:05.621101  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:05.640455  495950 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:05.640883  495950 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1108 10:19:05.640948  495950 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-330758 && echo "newest-cni-330758" | sudo tee /etc/hostname
	I1108 10:19:05.807879  495950 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-330758
	
	I1108 10:19:05.807982  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:05.826849  495950 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:05.827189  495950 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1108 10:19:05.827213  495950 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-330758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-330758/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-330758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:19:05.977551  495950 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:19:05.977581  495950 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:19:05.977600  495950 ubuntu.go:190] setting up certificates
	I1108 10:19:05.977610  495950 provision.go:84] configureAuth start
	I1108 10:19:05.977681  495950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-330758
	I1108 10:19:05.995299  495950 provision.go:143] copyHostCerts
	I1108 10:19:05.995367  495950 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:19:05.995376  495950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:19:05.995453  495950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:19:05.995538  495950 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:19:05.995543  495950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:19:05.995678  495950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:19:05.995766  495950 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:19:05.995772  495950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:19:05.995850  495950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:19:05.995959  495950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.newest-cni-330758 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-330758]
	I1108 10:19:06.394197  495950 provision.go:177] copyRemoteCerts
	I1108 10:19:06.394280  495950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:19:06.394324  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:06.413020  495950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:06.517017  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:19:06.535262  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:19:06.552379  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
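The server certificate generated and copied above should carry exactly the SANs from the san=[...] line; openssl can confirm this from the host. A sketch:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'
    # expected entries: localhost, minikube, newest-cni-330758, 127.0.0.1, 192.168.76.2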
	I1108 10:19:06.570884  495950 provision.go:87] duration metric: took 593.251932ms to configureAuth
	I1108 10:19:06.570910  495950 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:19:06.571111  495950 config.go:182] Loaded profile config "newest-cni-330758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:06.571224  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:06.588555  495950 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:06.588869  495950 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1108 10:19:06.588885  495950 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:19:06.854055  495950 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:19:06.854074  495950 machine.go:97] duration metric: took 1.409232164s to provisionDockerMachine
	I1108 10:19:06.854084  495950 client.go:176] duration metric: took 8.552071816s to LocalClient.Create
	I1108 10:19:06.854097  495950 start.go:167] duration metric: took 8.552145466s to libmachine.API.Create "newest-cni-330758"
	I1108 10:19:06.854105  495950 start.go:293] postStartSetup for "newest-cni-330758" (driver="docker")
	I1108 10:19:06.854114  495950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:19:06.854199  495950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:19:06.854244  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:06.873261  495950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:06.980870  495950 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:19:06.984088  495950 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:19:06.984126  495950 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:19:06.984152  495950 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:19:06.984228  495950 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:19:06.984357  495950 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:19:06.984518  495950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:19:06.992312  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:19:07.012667  495950 start.go:296] duration metric: took 158.546184ms for postStartSetup
	I1108 10:19:07.013158  495950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-330758
	I1108 10:19:07.030940  495950 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/config.json ...
	I1108 10:19:07.031219  495950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:19:07.031267  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:07.048183  495950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:07.153842  495950 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:19:07.158436  495950 start.go:128] duration metric: took 8.860153049s to createHost
	I1108 10:19:07.158461  495950 start.go:83] releasing machines lock for "newest-cni-330758", held for 8.860303516s
	I1108 10:19:07.158528  495950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-330758
	I1108 10:19:07.175456  495950 ssh_runner.go:195] Run: cat /version.json
	I1108 10:19:07.175510  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:07.175516  495950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:19:07.175586  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:07.198300  495950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:07.209210  495950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:07.305066  495950 ssh_runner.go:195] Run: systemctl --version
	I1108 10:19:07.394432  495950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:19:07.431480  495950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:19:07.435774  495950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:19:07.435845  495950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:19:07.472094  495950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:19:07.472121  495950 start.go:496] detecting cgroup driver to use...
	I1108 10:19:07.472154  495950 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:19:07.472201  495950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:19:07.489144  495950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:19:07.502703  495950 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:19:07.502762  495950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:19:07.521065  495950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:19:07.539435  495950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:19:07.665555  495950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:19:07.792665  495950 docker.go:234] disabling docker service ...
	I1108 10:19:07.792787  495950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:19:07.816937  495950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:19:07.830275  495950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:19:07.945038  495950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	W1108 10:19:04.943058  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	W1108 10:19:07.444028  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	I1108 10:19:08.077943  495950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:19:08.092806  495950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:19:08.107115  495950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:19:08.107231  495950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:08.115751  495950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:19:08.115889  495950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:08.124698  495950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:08.133551  495950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:08.142387  495950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:19:08.150806  495950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:08.160124  495950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:08.174102  495950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:08.183331  495950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:19:08.191035  495950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:19:08.198721  495950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:19:08.315317  495950 ssh_runner.go:195] Run: sudo systemctl restart crio
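Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl that the rest of this start depends on. A sketch of a manual check on the node, with values inferred from the commands above rather than captured from this run:

    # e.g. via `minikube ssh -p newest-cni-330758` once the node is reachable
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",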
	I1108 10:19:08.447977  495950 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:19:08.448045  495950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:19:08.451951  495950 start.go:564] Will wait 60s for crictl version
	I1108 10:19:08.452013  495950 ssh_runner.go:195] Run: which crictl
	I1108 10:19:08.455570  495950 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:19:08.482300  495950 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:19:08.482394  495950 ssh_runner.go:195] Run: crio --version
	I1108 10:19:08.513454  495950 ssh_runner.go:195] Run: crio --version
	I1108 10:19:08.552978  495950 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:19:08.555914  495950 cli_runner.go:164] Run: docker network inspect newest-cni-330758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:19:08.572530  495950 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:19:08.576490  495950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:19:08.590246  495950 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 10:19:08.593077  495950 kubeadm.go:884] updating cluster {Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:19:08.593213  495950 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:19:08.593296  495950 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:19:08.626088  495950 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:19:08.626126  495950 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:19:08.626195  495950 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:19:08.654917  495950 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:19:08.654943  495950 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:19:08.654952  495950 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:19:08.655035  495950 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-330758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
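Once the 10-kubeadm.conf drop-in written a few lines below lands on the node, the effective kubelet unit (including the ExecStart shown above) can be reviewed with systemd's own tooling. A brief sketch:

    systemctl cat kubelet                     # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p DropInPaths     # confirms the drop-in directory is picked up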
	I1108 10:19:08.655130  495950 ssh_runner.go:195] Run: crio config
	I1108 10:19:08.721925  495950 cni.go:84] Creating CNI manager for ""
	I1108 10:19:08.721947  495950 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:19:08.721965  495950 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 10:19:08.721992  495950 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-330758 NodeName:newest-cni-330758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:19:08.722121  495950 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-330758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:19:08.722195  495950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:19:08.731396  495950 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:19:08.731465  495950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:19:08.740822  495950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:19:08.754127  495950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:19:08.767037  495950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
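This is the kubeadm configuration rendered above being written out as /var/tmp/minikube/kubeadm.yaml.new (it is copied to kubeadm.yaml further down). Before the init step at the end of this log it could be sanity-checked in place; a sketch, assuming the `kubeadm config validate` subcommand shipped with v1.34.x:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new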
	I1108 10:19:08.780198  495950 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:19:08.783971  495950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:19:08.793815  495950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:19:08.917130  495950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:19:08.935864  495950 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758 for IP: 192.168.76.2
	I1108 10:19:08.935897  495950 certs.go:195] generating shared ca certs ...
	I1108 10:19:08.935913  495950 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:08.936082  495950 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:19:08.936145  495950 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:19:08.936157  495950 certs.go:257] generating profile certs ...
	I1108 10:19:08.936222  495950 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/client.key
	I1108 10:19:08.936240  495950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/client.crt with IP's: []
	I1108 10:19:09.025157  495950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/client.crt ...
	I1108 10:19:09.025188  495950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/client.crt: {Name:mk1fc036fd865f56644c3bd71a074313f9bfa5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:09.025394  495950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/client.key ...
	I1108 10:19:09.025410  495950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/client.key: {Name:mkee977291c9ca6b79118a449c375a6f35b13654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:09.025510  495950 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.key.8c8c918e
	I1108 10:19:09.025530  495950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.crt.8c8c918e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:19:09.204145  495950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.crt.8c8c918e ...
	I1108 10:19:09.204178  495950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.crt.8c8c918e: {Name:mkb1372ed32507a9590d5d61bc89384a4a01a607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:09.204372  495950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.key.8c8c918e ...
	I1108 10:19:09.204386  495950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.key.8c8c918e: {Name:mkc76835c9dbccc76f600cb548d902fd50a911c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:09.204485  495950 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.crt.8c8c918e -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.crt
	I1108 10:19:09.204572  495950 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.key.8c8c918e -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.key
	I1108 10:19:09.204636  495950 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.key
	I1108 10:19:09.204651  495950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.crt with IP's: []
	I1108 10:19:09.680868  495950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.crt ...
	I1108 10:19:09.680902  495950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.crt: {Name:mk323cb6751dc1ff3a806c67cc9bccc9c68fe643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:09.681106  495950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.key ...
	I1108 10:19:09.681127  495950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.key: {Name:mk9c188b2118a865ef4e0005c224ac1711767084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:09.681322  495950 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:19:09.681365  495950 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:19:09.681380  495950 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:19:09.681408  495950 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:19:09.681436  495950 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:19:09.681465  495950 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:19:09.681512  495950 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:19:09.682949  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:19:09.702596  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:19:09.732770  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:19:09.755012  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:19:09.778909  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:19:09.799049  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 10:19:09.817430  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:19:09.835340  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:19:09.853098  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:19:09.874215  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:19:09.892686  495950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:19:09.911677  495950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:19:09.925702  495950 ssh_runner.go:195] Run: openssl version
	I1108 10:19:09.932140  495950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:19:09.943241  495950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:19:09.947053  495950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:19:09.947121  495950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:19:09.990898  495950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:19:09.999614  495950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:19:10.013369  495950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:19:10.018002  495950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:19:10.018075  495950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:19:10.060441  495950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:19:10.069413  495950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:19:10.079161  495950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:19:10.083576  495950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:19:10.083690  495950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:19:10.125344  495950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
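The three test-and-link steps above follow the standard OpenSSL subject-hash convention for /etc/ssl/certs: the link name is the certificate's subject hash plus a .0 suffix. A sketch reproducing one of them by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # for this CA the hash is b5213941, matching the b5213941.0 link created above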
	I1108 10:19:10.133983  495950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:19:10.137803  495950 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:19:10.137856  495950 kubeadm.go:401] StartCluster: {Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:19:10.137931  495950 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:19:10.138004  495950 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:19:10.166808  495950 cri.go:89] found id: ""
	I1108 10:19:10.166943  495950 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:19:10.180264  495950 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:19:10.189126  495950 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:19:10.189195  495950 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:19:10.197192  495950 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:19:10.197211  495950 kubeadm.go:158] found existing configuration files:
	
	I1108 10:19:10.197261  495950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:19:10.205187  495950 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:19:10.205256  495950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:19:10.213088  495950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:19:10.221209  495950 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:19:10.221330  495950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:19:10.228754  495950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:19:10.236596  495950 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:19:10.236692  495950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:19:10.244193  495950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:19:10.253753  495950 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:19:10.253820  495950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:19:10.261866  495950 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:19:10.302474  495950 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:19:10.302835  495950 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:19:10.328030  495950 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:19:10.328147  495950 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:19:10.328194  495950 kubeadm.go:319] OS: Linux
	I1108 10:19:10.328247  495950 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:19:10.328303  495950 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:19:10.328357  495950 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:19:10.328413  495950 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:19:10.328468  495950 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:19:10.328522  495950 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:19:10.328573  495950 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:19:10.328628  495950 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:19:10.328680  495950 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:19:10.399927  495950 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:19:10.400044  495950 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:19:10.400144  495950 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:19:10.410046  495950 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 10:19:10.417020  495950 out.go:252]   - Generating certificates and keys ...
	I1108 10:19:10.417114  495950 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:19:10.417188  495950 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:19:10.837381  495950 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:19:11.357690  495950 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:19:11.942023  495950 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:19:12.333687  495950 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:19:12.676548  495950 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:19:12.677108  495950 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-330758] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1108 10:19:09.444090  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	W1108 10:19:11.943941  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	I1108 10:19:13.219529  495950 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:19:13.219882  495950 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-330758] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:19:14.530219  495950 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:19:15.024896  495950 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:19:15.434925  495950 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:19:15.435194  495950 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:19:15.864625  495950 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:19:16.174010  495950 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:19:17.091047  495950 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:19:17.462410  495950 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:19:17.760157  495950 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:19:17.760801  495950 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:19:17.764273  495950 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:19:17.767677  495950 out.go:252]   - Booting up control plane ...
	I1108 10:19:17.767784  495950 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:19:17.767871  495950 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:19:17.767961  495950 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:19:17.785638  495950 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:19:17.786013  495950 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:19:17.794477  495950 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:19:17.794875  495950 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:19:17.795242  495950 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:19:17.932066  495950 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:19:17.932206  495950 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1108 10:19:14.443677  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	W1108 10:19:16.945306  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	I1108 10:19:18.450459  495950 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 518.838235ms
	I1108 10:19:18.454443  495950 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:19:18.454549  495950 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:19:18.454661  495950 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:19:18.454748  495950 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 10:19:21.633021  495950 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.177375557s
	I1108 10:19:22.762483  495950 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.307997187s
	W1108 10:19:18.946036  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	W1108 10:19:21.443106  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	I1108 10:19:24.966520  495950 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.51196931s
	I1108 10:19:24.988298  495950 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:19:25.009268  495950 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:19:25.030187  495950 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:19:25.030507  495950 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-330758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:19:25.043372  495950 kubeadm.go:319] [bootstrap-token] Using token: qo63cc.lz97qx5d0ipeffkc
	I1108 10:19:25.046424  495950 out.go:252]   - Configuring RBAC rules ...
	I1108 10:19:25.046565  495950 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:19:25.055587  495950 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:19:25.068371  495950 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:19:25.073306  495950 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:19:25.078218  495950 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:19:25.084986  495950 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:19:25.374704  495950 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:19:25.833034  495950 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:19:26.374428  495950 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:19:26.375529  495950 kubeadm.go:319] 
	I1108 10:19:26.375608  495950 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:19:26.375618  495950 kubeadm.go:319] 
	I1108 10:19:26.375698  495950 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:19:26.375707  495950 kubeadm.go:319] 
	I1108 10:19:26.375734  495950 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:19:26.375808  495950 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:19:26.375865  495950 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:19:26.375875  495950 kubeadm.go:319] 
	I1108 10:19:26.375932  495950 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:19:26.375941  495950 kubeadm.go:319] 
	I1108 10:19:26.375991  495950 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:19:26.375999  495950 kubeadm.go:319] 
	I1108 10:19:26.376053  495950 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:19:26.376142  495950 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:19:26.376221  495950 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:19:26.376235  495950 kubeadm.go:319] 
	I1108 10:19:26.376324  495950 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:19:26.376414  495950 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:19:26.376423  495950 kubeadm.go:319] 
	I1108 10:19:26.376511  495950 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qo63cc.lz97qx5d0ipeffkc \
	I1108 10:19:26.376649  495950 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 10:19:26.376677  495950 kubeadm.go:319] 	--control-plane 
	I1108 10:19:26.376685  495950 kubeadm.go:319] 
	I1108 10:19:26.376774  495950 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:19:26.376782  495950 kubeadm.go:319] 
	I1108 10:19:26.376875  495950 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qo63cc.lz97qx5d0ipeffkc \
	I1108 10:19:26.377012  495950 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
	I1108 10:19:26.381925  495950 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:19:26.382207  495950 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:19:26.382337  495950 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:19:26.382359  495950 cni.go:84] Creating CNI manager for ""
	I1108 10:19:26.382371  495950 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:19:26.385506  495950 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:19:26.388635  495950 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:19:26.393397  495950 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:19:26.393415  495950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:19:26.408318  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:19:26.737436  495950 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:19:26.737581  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:26.737669  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-330758 minikube.k8s.io/updated_at=2025_11_08T10_19_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=newest-cni-330758 minikube.k8s.io/primary=true
	I1108 10:19:26.756411  495950 ops.go:34] apiserver oom_adj: -16
	I1108 10:19:26.903089  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:27.404137  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:27.903473  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1108 10:19:23.443142  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	W1108 10:19:25.943129  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	I1108 10:19:28.403954  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:28.903205  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:29.403481  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:29.903713  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:30.403230  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:30.904152  495950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:19:31.190421  495950 kubeadm.go:1114] duration metric: took 4.452890009s to wait for elevateKubeSystemPrivileges
	I1108 10:19:31.190447  495950 kubeadm.go:403] duration metric: took 21.05259471s to StartCluster
	I1108 10:19:31.190463  495950 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:31.190522  495950 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:19:31.191447  495950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:31.191641  495950 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:19:31.191724  495950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:19:31.191975  495950 config.go:182] Loaded profile config "newest-cni-330758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:31.192030  495950 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:19:31.192091  495950 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-330758"
	I1108 10:19:31.192106  495950 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-330758"
	I1108 10:19:31.192127  495950 host.go:66] Checking if "newest-cni-330758" exists ...
	I1108 10:19:31.192599  495950 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:31.193083  495950 addons.go:70] Setting default-storageclass=true in profile "newest-cni-330758"
	I1108 10:19:31.193100  495950 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-330758"
	I1108 10:19:31.193363  495950 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:31.197974  495950 out.go:179] * Verifying Kubernetes components...
	I1108 10:19:31.202805  495950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:19:31.241933  495950 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:19:31.245175  495950 addons.go:239] Setting addon default-storageclass=true in "newest-cni-330758"
	I1108 10:19:31.245217  495950 host.go:66] Checking if "newest-cni-330758" exists ...
	I1108 10:19:31.245657  495950 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:31.246749  495950 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:19:31.246777  495950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:19:31.246829  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:31.280725  495950 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:19:31.280746  495950 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:19:31.280820  495950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:31.295923  495950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:31.318993  495950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:31.571671  495950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:19:31.594666  495950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:19:31.682532  495950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:19:31.701652  495950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:19:32.230651  495950 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 10:19:32.511047  495950 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:19:32.511125  495950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:19:32.528044  495950 api_server.go:72] duration metric: took 1.336373568s to wait for apiserver process to appear ...
	I1108 10:19:32.528107  495950 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:19:32.528138  495950 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:19:32.532453  495950 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 10:19:32.536133  495950 addons.go:515] duration metric: took 1.34408839s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 10:19:32.538837  495950 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:19:32.539856  495950 api_server.go:141] control plane version: v1.34.1
	I1108 10:19:32.539881  495950 api_server.go:131] duration metric: took 11.754195ms to wait for apiserver health ...
	I1108 10:19:32.539892  495950 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:19:32.543342  495950 system_pods.go:59] 8 kube-system pods found
	I1108 10:19:32.543386  495950 system_pods.go:61] "coredns-66bc5c9577-4zq2p" [148b4a8d-04ba-4b85-ba4c-aa7ff04adeeb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:19:32.543393  495950 system_pods.go:61] "etcd-newest-cni-330758" [b16f4406-54aa-41c8-922d-4d459430fb85] Running
	I1108 10:19:32.543399  495950 system_pods.go:61] "kindnet-2cmcs" [c14e613a-b33c-4bde-9cd9-0bf775170ccf] Running
	I1108 10:19:32.543406  495950 system_pods.go:61] "kube-apiserver-newest-cni-330758" [67075241-8851-41d1-84f3-8e21d612ad3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:19:32.543419  495950 system_pods.go:61] "kube-controller-manager-newest-cni-330758" [0ba4c4f4-eb49-4069-8a07-04dedb66da92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:19:32.543427  495950 system_pods.go:61] "kube-proxy-hzls4" [c81513fd-e2c2-4e11-a842-c8ae0ceaed28] Running
	I1108 10:19:32.543435  495950 system_pods.go:61] "kube-scheduler-newest-cni-330758" [4530a83c-b97c-4e17-b43c-e3333e2c0ead] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:19:32.543448  495950 system_pods.go:61] "storage-provisioner" [de7cb71c-4551-4e2b-a71b-9fea74e783e2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:19:32.543456  495950 system_pods.go:74] duration metric: took 3.557523ms to wait for pod list to return data ...
	I1108 10:19:32.543469  495950 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:19:32.546213  495950 default_sa.go:45] found service account: "default"
	I1108 10:19:32.546238  495950 default_sa.go:55] duration metric: took 2.761881ms for default service account to be created ...
	I1108 10:19:32.546251  495950 kubeadm.go:587] duration metric: took 1.354587821s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:19:32.546268  495950 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:19:32.548955  495950 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:19:32.548992  495950 node_conditions.go:123] node cpu capacity is 2
	I1108 10:19:32.549006  495950 node_conditions.go:105] duration metric: took 2.733056ms to run NodePressure ...
	I1108 10:19:32.549019  495950 start.go:242] waiting for startup goroutines ...
	I1108 10:19:32.734690  495950 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-330758" context rescaled to 1 replicas
	I1108 10:19:32.734728  495950 start.go:247] waiting for cluster config update ...
	I1108 10:19:32.734740  495950 start.go:256] writing updated cluster config ...
	I1108 10:19:32.735054  495950 ssh_runner.go:195] Run: rm -f paused
	I1108 10:19:32.793626  495950 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:19:32.797238  495950 out.go:179] * Done! kubectl is now configured to use "newest-cni-330758" cluster and "default" namespace by default
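
	The two readiness gates logged just above (api_server.go polling https://192.168.76.2:8443/healthz until it returns 200, then the kube-system pod listing) can be reproduced by hand once this profile's kubeconfig is in place. A minimal sketch, assuming the newest-cni-330758 context written by this run is active:

	    kubectl --context newest-cni-330758 get --raw /healthz
	    # prints "ok", matching the 200 response in the log
	    kubectl --context newest-cni-330758 get pods -n kube-system
	    # coredns and storage-provisioner stay Pending until the node's not-ready taint clears
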
	W1108 10:19:28.444583  491995 node_ready.go:57] node "default-k8s-diff-port-689864" has "Ready":"False" status (will retry)
	I1108 10:19:30.944671  491995 node_ready.go:49] node "default-k8s-diff-port-689864" is "Ready"
	I1108 10:19:30.944706  491995 node_ready.go:38] duration metric: took 41.505229693s for node "default-k8s-diff-port-689864" to be "Ready" ...
	I1108 10:19:30.944720  491995 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:19:30.944781  491995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:19:30.976649  491995 api_server.go:72] duration metric: took 42.874296454s to wait for apiserver process to appear ...
	I1108 10:19:30.976674  491995 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:19:30.976705  491995 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1108 10:19:30.989875  491995 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1108 10:19:30.991557  491995 api_server.go:141] control plane version: v1.34.1
	I1108 10:19:30.991596  491995 api_server.go:131] duration metric: took 14.91329ms to wait for apiserver health ...
	I1108 10:19:30.991608  491995 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:19:31.018452  491995 system_pods.go:59] 8 kube-system pods found
	I1108 10:19:31.018499  491995 system_pods.go:61] "coredns-66bc5c9577-5nhxx" [ae48e4e7-48a3-4cc4-be6f-1102abd83f25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:19:31.018507  491995 system_pods.go:61] "etcd-default-k8s-diff-port-689864" [78cc584e-cc4b-499b-a3b5-094712ebc4c9] Running
	I1108 10:19:31.018514  491995 system_pods.go:61] "kindnet-c98xc" [adc3d88d-8c83-4dab-958c-42c33e6f43f3] Running
	I1108 10:19:31.018520  491995 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-689864" [c5808395-3c00-40c6-b9b0-ba89b22436ba] Running
	I1108 10:19:31.018525  491995 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-689864" [00f28beb-d4d8-4fa0-8d35-f8c0f2a0a09e] Running
	I1108 10:19:31.018530  491995 system_pods.go:61] "kube-proxy-lcscg" [096de2a8-f856-4f6c-ac17-c3e8f292ac77] Running
	I1108 10:19:31.018535  491995 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-689864" [de78c3f6-6c2b-4d1b-813a-4c9b69349129] Running
	I1108 10:19:31.018542  491995 system_pods.go:61] "storage-provisioner" [5a04d7b1-40e4-474f-acab-716d8e5e70de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:19:31.018553  491995 system_pods.go:74] duration metric: took 26.937068ms to wait for pod list to return data ...
	I1108 10:19:31.018571  491995 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:19:31.026911  491995 default_sa.go:45] found service account: "default"
	I1108 10:19:31.026945  491995 default_sa.go:55] duration metric: took 8.367579ms for default service account to be created ...
	I1108 10:19:31.026956  491995 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:19:31.154615  491995 system_pods.go:86] 8 kube-system pods found
	I1108 10:19:31.154661  491995 system_pods.go:89] "coredns-66bc5c9577-5nhxx" [ae48e4e7-48a3-4cc4-be6f-1102abd83f25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:19:31.154669  491995 system_pods.go:89] "etcd-default-k8s-diff-port-689864" [78cc584e-cc4b-499b-a3b5-094712ebc4c9] Running
	I1108 10:19:31.154676  491995 system_pods.go:89] "kindnet-c98xc" [adc3d88d-8c83-4dab-958c-42c33e6f43f3] Running
	I1108 10:19:31.154681  491995 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-689864" [c5808395-3c00-40c6-b9b0-ba89b22436ba] Running
	I1108 10:19:31.154686  491995 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-689864" [00f28beb-d4d8-4fa0-8d35-f8c0f2a0a09e] Running
	I1108 10:19:31.154695  491995 system_pods.go:89] "kube-proxy-lcscg" [096de2a8-f856-4f6c-ac17-c3e8f292ac77] Running
	I1108 10:19:31.154699  491995 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-689864" [de78c3f6-6c2b-4d1b-813a-4c9b69349129] Running
	I1108 10:19:31.154705  491995 system_pods.go:89] "storage-provisioner" [5a04d7b1-40e4-474f-acab-716d8e5e70de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:19:31.154743  491995 retry.go:31] will retry after 220.390543ms: missing components: kube-dns
	I1108 10:19:31.379662  491995 system_pods.go:86] 8 kube-system pods found
	I1108 10:19:31.379700  491995 system_pods.go:89] "coredns-66bc5c9577-5nhxx" [ae48e4e7-48a3-4cc4-be6f-1102abd83f25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:19:31.379709  491995 system_pods.go:89] "etcd-default-k8s-diff-port-689864" [78cc584e-cc4b-499b-a3b5-094712ebc4c9] Running
	I1108 10:19:31.379716  491995 system_pods.go:89] "kindnet-c98xc" [adc3d88d-8c83-4dab-958c-42c33e6f43f3] Running
	I1108 10:19:31.379721  491995 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-689864" [c5808395-3c00-40c6-b9b0-ba89b22436ba] Running
	I1108 10:19:31.379725  491995 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-689864" [00f28beb-d4d8-4fa0-8d35-f8c0f2a0a09e] Running
	I1108 10:19:31.379729  491995 system_pods.go:89] "kube-proxy-lcscg" [096de2a8-f856-4f6c-ac17-c3e8f292ac77] Running
	I1108 10:19:31.379733  491995 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-689864" [de78c3f6-6c2b-4d1b-813a-4c9b69349129] Running
	I1108 10:19:31.379739  491995 system_pods.go:89] "storage-provisioner" [5a04d7b1-40e4-474f-acab-716d8e5e70de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:19:31.379755  491995 retry.go:31] will retry after 268.973365ms: missing components: kube-dns
	I1108 10:19:31.662401  491995 system_pods.go:86] 8 kube-system pods found
	I1108 10:19:31.662435  491995 system_pods.go:89] "coredns-66bc5c9577-5nhxx" [ae48e4e7-48a3-4cc4-be6f-1102abd83f25] Running
	I1108 10:19:31.662442  491995 system_pods.go:89] "etcd-default-k8s-diff-port-689864" [78cc584e-cc4b-499b-a3b5-094712ebc4c9] Running
	I1108 10:19:31.662447  491995 system_pods.go:89] "kindnet-c98xc" [adc3d88d-8c83-4dab-958c-42c33e6f43f3] Running
	I1108 10:19:31.662452  491995 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-689864" [c5808395-3c00-40c6-b9b0-ba89b22436ba] Running
	I1108 10:19:31.662456  491995 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-689864" [00f28beb-d4d8-4fa0-8d35-f8c0f2a0a09e] Running
	I1108 10:19:31.662461  491995 system_pods.go:89] "kube-proxy-lcscg" [096de2a8-f856-4f6c-ac17-c3e8f292ac77] Running
	I1108 10:19:31.662465  491995 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-689864" [de78c3f6-6c2b-4d1b-813a-4c9b69349129] Running
	I1108 10:19:31.662469  491995 system_pods.go:89] "storage-provisioner" [5a04d7b1-40e4-474f-acab-716d8e5e70de] Running
	I1108 10:19:31.662477  491995 system_pods.go:126] duration metric: took 635.514935ms to wait for k8s-apps to be running ...
	I1108 10:19:31.662488  491995 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:19:31.662540  491995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:19:31.685430  491995 system_svc.go:56] duration metric: took 22.933042ms WaitForService to wait for kubelet
	I1108 10:19:31.685460  491995 kubeadm.go:587] duration metric: took 43.583112837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:19:31.685478  491995 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:19:31.689852  491995 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:19:31.689895  491995 node_conditions.go:123] node cpu capacity is 2
	I1108 10:19:31.689910  491995 node_conditions.go:105] duration metric: took 4.426208ms to run NodePressure ...
	I1108 10:19:31.689931  491995 start.go:242] waiting for startup goroutines ...
	I1108 10:19:31.689938  491995 start.go:247] waiting for cluster config update ...
	I1108 10:19:31.689955  491995 start.go:256] writing updated cluster config ...
	I1108 10:19:31.690284  491995 ssh_runner.go:195] Run: rm -f paused
	I1108 10:19:31.697670  491995 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:19:31.704208  491995 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5nhxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:31.714344  491995 pod_ready.go:94] pod "coredns-66bc5c9577-5nhxx" is "Ready"
	I1108 10:19:31.714418  491995 pod_ready.go:86] duration metric: took 10.187206ms for pod "coredns-66bc5c9577-5nhxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:31.718703  491995 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:31.728345  491995 pod_ready.go:94] pod "etcd-default-k8s-diff-port-689864" is "Ready"
	I1108 10:19:31.728367  491995 pod_ready.go:86] duration metric: took 9.592018ms for pod "etcd-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:31.731869  491995 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:31.738045  491995 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-689864" is "Ready"
	I1108 10:19:31.738071  491995 pod_ready.go:86] duration metric: took 6.176362ms for pod "kube-apiserver-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:31.740684  491995 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:32.102349  491995 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-689864" is "Ready"
	I1108 10:19:32.102378  491995 pod_ready.go:86] duration metric: took 361.666083ms for pod "kube-controller-manager-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:32.302982  491995 pod_ready.go:83] waiting for pod "kube-proxy-lcscg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:32.701633  491995 pod_ready.go:94] pod "kube-proxy-lcscg" is "Ready"
	I1108 10:19:32.701661  491995 pod_ready.go:86] duration metric: took 398.649525ms for pod "kube-proxy-lcscg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:32.902049  491995 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:33.303222  491995 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-689864" is "Ready"
	I1108 10:19:33.303246  491995 pod_ready.go:86] duration metric: took 401.172751ms for pod "kube-scheduler-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:19:33.303258  491995 pod_ready.go:40] duration metric: took 1.605557622s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:19:33.376320  491995 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:19:33.379471  491995 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-689864" cluster and "default" namespace by default
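
	The pod_ready.go wait above loops over the control-plane component labels until each pod reports Ready; roughly the same check can be made with kubectl's wait verb. A sketch, assuming the default-k8s-diff-port-689864 context from this run, with label selectors taken from the log line at 10:19:31.697670:

	    kubectl --context default-k8s-diff-port-689864 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	    kubectl --context default-k8s-diff-port-689864 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m
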
	
	
	==> CRI-O <==
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.062337727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.07098466Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3526a161-65df-49ef-9ace-79c802c30c99 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.079553382Z" level=info msg="Ran pod sandbox ac633cdb5c385f9ca88855f3911dc754c0361091f8b922bbc1f89529b25f6014 with infra container: kube-system/kube-proxy-hzls4/POD" id=3526a161-65df-49ef-9ace-79c802c30c99 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.089181067Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=14ee3858-1722-417a-9a00-d3358ae9736e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.097366695Z" level=info msg="Running pod sandbox: kube-system/kindnet-2cmcs/POD" id=e1de7081-698f-4ff8-ac50-3ae03ed57fe3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.105793245Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=25df64fb-b149-438b-8cbb-15c5a54a2349 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.105857599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.128882556Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e1de7081-698f-4ff8-ac50-3ae03ed57fe3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.129403619Z" level=info msg="Creating container: kube-system/kube-proxy-hzls4/kube-proxy" id=b5a9abf7-bf22-4344-a10a-e346c8fca02e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.12949907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.13210387Z" level=info msg="Ran pod sandbox 3f8297ace233090180904490ed5c4e1f77a17a45831ec06ca185da6987fe354e with infra container: kube-system/kindnet-2cmcs/POD" id=e1de7081-698f-4ff8-ac50-3ae03ed57fe3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.163654685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.16445009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.164742556Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0d435331-0c90-44be-ab0a-133e79b57967 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.172290205Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1714b019-1f07-4560-9ae6-b08044926980 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.182886769Z" level=info msg="Creating container: kube-system/kindnet-2cmcs/kindnet-cni" id=c7ea295c-1f99-4d42-9323-de63d4942539 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.183200668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.20063422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.213544472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.269703701Z" level=info msg="Created container ecc0b80589e5dd2bd15647c1e429f4bc4e824da10f18cca882d334f28008ac2d: kube-system/kube-proxy-hzls4/kube-proxy" id=b5a9abf7-bf22-4344-a10a-e346c8fca02e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.270632719Z" level=info msg="Starting container: ecc0b80589e5dd2bd15647c1e429f4bc4e824da10f18cca882d334f28008ac2d" id=d4fa158a-deeb-4cf6-9ab4-07470e502b5c name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.273993292Z" level=info msg="Started container" PID=1408 containerID=ecc0b80589e5dd2bd15647c1e429f4bc4e824da10f18cca882d334f28008ac2d description=kube-system/kube-proxy-hzls4/kube-proxy id=d4fa158a-deeb-4cf6-9ab4-07470e502b5c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac633cdb5c385f9ca88855f3911dc754c0361091f8b922bbc1f89529b25f6014
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.325780951Z" level=info msg="Created container 4325ea809f8c6ddf314add5797311c027638c06c111f497f3fef532490a3d0f1: kube-system/kindnet-2cmcs/kindnet-cni" id=c7ea295c-1f99-4d42-9323-de63d4942539 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.330891539Z" level=info msg="Starting container: 4325ea809f8c6ddf314add5797311c027638c06c111f497f3fef532490a3d0f1" id=4ef375b9-989d-45e8-aa07-d105f1e1adaa name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:19:31 newest-cni-330758 crio[839]: time="2025-11-08T10:19:31.353337111Z" level=info msg="Started container" PID=1415 containerID=4325ea809f8c6ddf314add5797311c027638c06c111f497f3fef532490a3d0f1 description=kube-system/kindnet-2cmcs/kindnet-cni id=4ef375b9-989d-45e8-aa07-d105f1e1adaa name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f8297ace233090180904490ed5c4e1f77a17a45831ec06ca185da6987fe354e
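
	The CRI-O excerpt above is the runtime's journal inside the node container. When debugging a profile interactively, the same stream can be pulled over minikube's SSH path; a sketch, assuming the newest-cni-330758 node is still running and that CRI-O is managed as the crio systemd unit in the kicbase image:

	    minikube -p newest-cni-330758 ssh -- sudo journalctl -u crio --no-pager -n 50
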
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4325ea809f8c6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   3f8297ace2330       kindnet-2cmcs                               kube-system
	ecc0b80589e5d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                0                   ac633cdb5c385       kube-proxy-hzls4                            kube-system
	89f081d468a9d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   d6ad005b92ef0       kube-scheduler-newest-cni-330758            kube-system
	ec326c625098d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   860e2327c3fb0       kube-controller-manager-newest-cni-330758   kube-system
	7251c957c6348       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   ee08730818a7c       kube-apiserver-newest-cni-330758            kube-system
	8e60645b5da58       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   f2f55e9d20fde       etcd-newest-cni-330758                      kube-system
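
	The container listing above is crictl output. It can be regenerated with the same namespace filter minikube itself uses in the cri.go step earlier in this log (crictl ps --label io.kubernetes.pod.namespace=kube-system); a sketch, assuming SSH access to the node:

	    minikube -p newest-cni-330758 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
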
	
	
	==> describe nodes <==
	Name:               newest-cni-330758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-330758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=newest-cni-330758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_19_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:19:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-330758
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:19:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:19:26 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:19:26 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:19:26 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 10:19:26 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-330758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                5853ff61-bbc9-4baf-94c5-07acd84b90c2
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-330758                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-2cmcs                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-330758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-330758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-proxy-hzls4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-330758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-330758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-330758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-330758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-330758 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-330758 event: Registered Node newest-cni-330758 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8e60645b5da5861d530b691fa8fed18e7a91a95e201091a4c3bf8f4de4c61ca9] <==
	{"level":"warn","ts":"2025-11-08T10:19:21.177926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.201859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.245077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.275987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.289253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.332003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.381062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.399391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.443253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.497527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.533266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.550715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.584410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.614139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.634080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.659209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.670786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.688953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.711309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.729714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.747021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.774380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.793500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.817828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:21.903220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37782","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:19:34 up  3:02,  0 user,  load average: 3.41, 3.74, 2.89
	Linux newest-cni-330758 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4325ea809f8c6ddf314add5797311c027638c06c111f497f3fef532490a3d0f1] <==
	I1108 10:19:31.427499       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:19:31.427726       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:19:31.427855       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:19:31.427866       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:19:31.427878       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:19:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:19:31.635020       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:19:31.635037       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:19:31.635045       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:19:31.635731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [7251c957c6348f4e848163fd81ba20f375aa703e52c94f0a0e03aa6f1a93d28c] <==
	I1108 10:19:22.815503       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:19:22.831042       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:19:22.850113       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:19:22.850440       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:19:22.850630       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:19:22.853413       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:19:22.858942       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:19:22.859932       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:19:23.527817       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:19:23.535694       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:19:23.535725       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:19:24.448443       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:19:24.506591       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:19:24.609220       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:19:24.621821       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 10:19:24.623108       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:19:24.630061       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:19:24.712458       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:19:25.803535       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:19:25.831231       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:19:25.844824       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:19:30.676219       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:19:30.685767       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:19:30.715875       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1108 10:19:30.789293       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ec326c625098d5ceef369a2bd69fca5b2b90041dad24cffd1072435191f3177e] <==
	I1108 10:19:29.760648       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 10:19:29.760762       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:19:29.760952       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 10:19:29.761007       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:19:29.761012       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:19:29.761068       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:19:29.761139       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-330758"
	I1108 10:19:29.761177       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 10:19:29.761366       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:19:29.762590       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:19:29.765452       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:19:29.765639       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:19:29.766698       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:19:29.775015       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:19:29.785183       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:19:29.802849       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:19:29.809046       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:19:29.809058       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:19:29.809093       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:19:29.809100       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:19:29.811287       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:19:29.811375       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:19:29.811420       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:19:29.811478       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:19:29.811651       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [ecc0b80589e5dd2bd15647c1e429f4bc4e824da10f18cca882d334f28008ac2d] <==
	I1108 10:19:31.598597       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:19:31.746748       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:19:31.848544       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:19:31.848583       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:19:31.848650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:19:31.931860       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:19:31.931924       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:19:31.946852       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:19:31.947559       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:19:31.947580       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:19:31.952352       1 config.go:200] "Starting service config controller"
	I1108 10:19:31.977829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:19:31.977507       1 config.go:309] "Starting node config controller"
	I1108 10:19:31.977874       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:19:31.977888       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:19:31.981483       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:19:31.981508       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:19:31.981535       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:19:31.981540       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:19:32.078677       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:19:32.081937       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:19:32.081978       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [89f081d468a9d3bf97f42cfe0d561167fbac37b18f787ceaccc44d55d14d19de] <==
	E1108 10:19:22.769508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:19:22.769607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:19:22.769694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:19:22.769766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:19:22.769859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:19:22.769947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:19:22.770028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:19:22.770107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:19:22.770195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:19:23.608985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:19:23.668162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:19:23.679900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:19:23.690810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:19:23.710377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:19:23.866915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 10:19:23.907897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:19:23.922085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:19:23.938175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:19:23.974385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:19:24.020552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:19:24.022536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:19:24.090712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:19:24.141539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:19:24.147475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1108 10:19:25.734092       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.034417    1297 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-330758"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.034532    1297 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-330758"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.718309    1297 apiserver.go:52] "Watching apiserver"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.753333    1297 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.868174    1297 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-330758"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.868064    1297 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-330758"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: E1108 10:19:26.891838    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-330758\" already exists" pod="kube-system/kube-scheduler-newest-cni-330758"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: E1108 10:19:26.904579    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-330758\" already exists" pod="kube-system/etcd-newest-cni-330758"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.929504    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-330758" podStartSLOduration=1.929484999 podStartE2EDuration="1.929484999s" podCreationTimestamp="2025-11-08 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:19:26.906531494 +0000 UTC m=+1.288834015" watchObservedRunningTime="2025-11-08 10:19:26.929484999 +0000 UTC m=+1.311787512"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.952842    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-330758" podStartSLOduration=1.952778422 podStartE2EDuration="1.952778422s" podCreationTimestamp="2025-11-08 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:19:26.931655646 +0000 UTC m=+1.313958167" watchObservedRunningTime="2025-11-08 10:19:26.952778422 +0000 UTC m=+1.335080927"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.978692    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-330758" podStartSLOduration=1.978672345 podStartE2EDuration="1.978672345s" podCreationTimestamp="2025-11-08 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:19:26.963893218 +0000 UTC m=+1.346195731" watchObservedRunningTime="2025-11-08 10:19:26.978672345 +0000 UTC m=+1.360974850"
	Nov 08 10:19:26 newest-cni-330758 kubelet[1297]: I1108 10:19:26.992004    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-330758" podStartSLOduration=3.991986604 podStartE2EDuration="3.991986604s" podCreationTimestamp="2025-11-08 10:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:19:26.979130655 +0000 UTC m=+1.361433168" watchObservedRunningTime="2025-11-08 10:19:26.991986604 +0000 UTC m=+1.374289109"
	Nov 08 10:19:29 newest-cni-330758 kubelet[1297]: I1108 10:19:29.834390    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 10:19:29 newest-cni-330758 kubelet[1297]: I1108 10:19:29.835546    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 10:19:30 newest-cni-330758 kubelet[1297]: I1108 10:19:30.804213    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c81513fd-e2c2-4e11-a842-c8ae0ceaed28-kube-proxy\") pod \"kube-proxy-hzls4\" (UID: \"c81513fd-e2c2-4e11-a842-c8ae0ceaed28\") " pod="kube-system/kube-proxy-hzls4"
	Nov 08 10:19:30 newest-cni-330758 kubelet[1297]: I1108 10:19:30.804467    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c81513fd-e2c2-4e11-a842-c8ae0ceaed28-xtables-lock\") pod \"kube-proxy-hzls4\" (UID: \"c81513fd-e2c2-4e11-a842-c8ae0ceaed28\") " pod="kube-system/kube-proxy-hzls4"
	Nov 08 10:19:30 newest-cni-330758 kubelet[1297]: I1108 10:19:30.804559    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsc6c\" (UniqueName: \"kubernetes.io/projected/c81513fd-e2c2-4e11-a842-c8ae0ceaed28-kube-api-access-rsc6c\") pod \"kube-proxy-hzls4\" (UID: \"c81513fd-e2c2-4e11-a842-c8ae0ceaed28\") " pod="kube-system/kube-proxy-hzls4"
	Nov 08 10:19:30 newest-cni-330758 kubelet[1297]: I1108 10:19:30.804637    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-cni-cfg\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:30 newest-cni-330758 kubelet[1297]: I1108 10:19:30.804712    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95x4f\" (UniqueName: \"kubernetes.io/projected/c14e613a-b33c-4bde-9cd9-0bf775170ccf-kube-api-access-95x4f\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:30 newest-cni-330758 kubelet[1297]: I1108 10:19:30.804786    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-xtables-lock\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:30 newest-cni-330758 kubelet[1297]: I1108 10:19:30.804875    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c81513fd-e2c2-4e11-a842-c8ae0ceaed28-lib-modules\") pod \"kube-proxy-hzls4\" (UID: \"c81513fd-e2c2-4e11-a842-c8ae0ceaed28\") " pod="kube-system/kube-proxy-hzls4"
	Nov 08 10:19:30 newest-cni-330758 kubelet[1297]: I1108 10:19:30.804992    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-lib-modules\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:31 newest-cni-330758 kubelet[1297]: I1108 10:19:31.015154    1297 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:19:31 newest-cni-330758 kubelet[1297]: I1108 10:19:31.910685    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hzls4" podStartSLOduration=1.910668772 podStartE2EDuration="1.910668772s" podCreationTimestamp="2025-11-08 10:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:19:31.910218347 +0000 UTC m=+6.292520860" watchObservedRunningTime="2025-11-08 10:19:31.910668772 +0000 UTC m=+6.292971277"
	Nov 08 10:19:33 newest-cni-330758 kubelet[1297]: I1108 10:19:33.096530    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2cmcs" podStartSLOduration=3.096510716 podStartE2EDuration="3.096510716s" podCreationTimestamp="2025-11-08 10:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:19:31.970293741 +0000 UTC m=+6.352596262" watchObservedRunningTime="2025-11-08 10:19:33.096510716 +0000 UTC m=+7.478813221"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-330758 -n newest-cni-330758
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-330758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4zq2p storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner: exit status 1 (83.627661ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4zq2p" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.59s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-689864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-689864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (354.117088ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:19:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-689864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-689864 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-689864 describe deploy/metrics-server -n kube-system: exit status 1 (111.422621ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-689864 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-689864
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-689864:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f",
	        "Created": "2025-11-08T10:18:18.537571387Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492392,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:18:18.61573683Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/hostname",
	        "HostsPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/hosts",
	        "LogPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f-json.log",
	        "Name": "/default-k8s-diff-port-689864",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-689864:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-689864",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f",
	                "LowerDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-689864",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-689864/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-689864",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-689864",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-689864",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "114aa1546d9edce2b71a1b9bcaac43b55ddacae9ad1fa54729d0ea26bc116a2d",
	            "SandboxKey": "/var/run/docker/netns/114aa1546d9e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-689864": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:76:62:17:65:e7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d632f4190a5769bf708ccc9b7017dc54cf240a895d92fa0248d238a968a6188d",
	                    "EndpointID": "eb5eb8b7acdfec33e39dc73d455d468186ccdfe1827dac96be096489debc16fd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-689864",
	                        "48dfdc9a3efb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-689864 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-689864 logs -n 25: (1.584793975s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:15 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-872727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │                     │
	│ stop    │ -p no-preload-872727 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:16 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ stop    │ -p embed-certs-606645 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:18 UTC │
	│ image   │ no-preload-872727 image list --format=json                                                                                                                                                                                                    │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p disable-driver-mounts-708013                                                                                                                                                                                                               │ disable-driver-mounts-708013 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ embed-certs-606645 image list --format=json                                                                                                                                                                                                   │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-606645 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p newest-cni-330758 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-330758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-689864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
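The table above appears to be minikube's audit of CLI invocations for each profile exercised in this run. As an illustration only (profile name taken from the table; both commands are standard minikube CLI), the same profile state can typically be inspected on the build host with:

	minikube profile list
	minikube -p default-k8s-diff-port-689864 status
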
	==> Last Start <==
	Log file created at: 2025/11/08 10:19:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:19:36.998130  499131 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:19:36.998320  499131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:19:36.998352  499131 out.go:374] Setting ErrFile to fd 2...
	I1108 10:19:36.998373  499131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:19:36.998654  499131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:19:36.999080  499131 out.go:368] Setting JSON to false
	I1108 10:19:37.000213  499131 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10926,"bootTime":1762586251,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:19:37.000352  499131 start.go:143] virtualization:  
	I1108 10:19:37.004269  499131 out.go:179] * [newest-cni-330758] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:19:37.008250  499131 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:19:37.008343  499131 notify.go:221] Checking for updates...
	I1108 10:19:37.014443  499131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:19:37.017715  499131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:19:37.020980  499131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:19:37.024259  499131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:19:37.027365  499131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:19:37.030792  499131 config.go:182] Loaded profile config "newest-cni-330758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:37.031385  499131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:19:37.059300  499131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:19:37.059419  499131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:19:37.122642  499131 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:19:37.112943602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:19:37.122750  499131 docker.go:319] overlay module found
	I1108 10:19:37.126169  499131 out.go:179] * Using the docker driver based on existing profile
	I1108 10:19:37.129016  499131 start.go:309] selected driver: docker
	I1108 10:19:37.129036  499131 start.go:930] validating driver "docker" against &{Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:19:37.129148  499131 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:19:37.129853  499131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:19:37.186429  499131 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:19:37.176179061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:19:37.186758  499131 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:19:37.186794  499131 cni.go:84] Creating CNI manager for ""
	I1108 10:19:37.186850  499131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:19:37.186891  499131 start.go:353] cluster config:
	{Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:19:37.191917  499131 out.go:179] * Starting "newest-cni-330758" primary control-plane node in "newest-cni-330758" cluster
	I1108 10:19:37.194808  499131 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:19:37.197683  499131 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:19:37.200532  499131 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:19:37.200591  499131 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:19:37.200604  499131 cache.go:59] Caching tarball of preloaded images
	I1108 10:19:37.200641  499131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:19:37.200701  499131 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:19:37.200711  499131 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:19:37.200835  499131 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/config.json ...
	I1108 10:19:37.219994  499131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:19:37.220018  499131 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:19:37.220036  499131 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:19:37.220059  499131 start.go:360] acquireMachinesLock for newest-cni-330758: {Name:mka68247f3ee22af15ad7dc6cf73067d1036d0ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:19:37.220132  499131 start.go:364] duration metric: took 46.048µs to acquireMachinesLock for "newest-cni-330758"
	I1108 10:19:37.220155  499131 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:19:37.220162  499131 fix.go:54] fixHost starting: 
	I1108 10:19:37.220419  499131 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:37.238233  499131 fix.go:112] recreateIfNeeded on newest-cni-330758: state=Stopped err=<nil>
	W1108 10:19:37.238267  499131 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:19:37.241635  499131 out.go:252] * Restarting existing docker container for "newest-cni-330758" ...
	I1108 10:19:37.241727  499131 cli_runner.go:164] Run: docker start newest-cni-330758
	I1108 10:19:37.499830  499131 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:37.519292  499131 kic.go:430] container "newest-cni-330758" state is running.
	I1108 10:19:37.519708  499131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-330758
	I1108 10:19:37.539601  499131 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/config.json ...
	I1108 10:19:37.539967  499131 machine.go:94] provisionDockerMachine start ...
	I1108 10:19:37.540055  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:37.561612  499131 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:37.562003  499131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1108 10:19:37.562022  499131 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:19:37.562675  499131 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:19:40.712831  499131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-330758
	
	I1108 10:19:40.712854  499131 ubuntu.go:182] provisioning hostname "newest-cni-330758"
	I1108 10:19:40.712969  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:40.731890  499131 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:40.732220  499131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1108 10:19:40.732237  499131 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-330758 && echo "newest-cni-330758" | sudo tee /etc/hostname
	I1108 10:19:40.898060  499131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-330758
	
	I1108 10:19:40.898151  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:40.915283  499131 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:40.915597  499131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1108 10:19:40.915619  499131 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-330758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-330758/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-330758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:19:41.066019  499131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:19:41.066056  499131 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:19:41.066080  499131 ubuntu.go:190] setting up certificates
	I1108 10:19:41.066091  499131 provision.go:84] configureAuth start
	I1108 10:19:41.066154  499131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-330758
	I1108 10:19:41.083513  499131 provision.go:143] copyHostCerts
	I1108 10:19:41.083586  499131 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:19:41.083600  499131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:19:41.083681  499131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:19:41.083792  499131 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:19:41.083802  499131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:19:41.083830  499131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:19:41.083902  499131 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:19:41.083912  499131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:19:41.083936  499131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:19:41.084043  499131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.newest-cni-330758 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-330758]
	I1108 10:19:41.293780  499131 provision.go:177] copyRemoteCerts
	I1108 10:19:41.293875  499131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:19:41.293937  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:41.312070  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:41.420665  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:19:41.437797  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:19:41.456610  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:19:41.476538  499131 provision.go:87] duration metric: took 410.41973ms to configureAuth
	I1108 10:19:41.476565  499131 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:19:41.476771  499131 config.go:182] Loaded profile config "newest-cni-330758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:41.476896  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:41.494178  499131 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:41.494622  499131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1108 10:19:41.494644  499131 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:19:41.812050  499131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:19:41.812071  499131 machine.go:97] duration metric: took 4.27209174s to provisionDockerMachine
	I1108 10:19:41.812082  499131 start.go:293] postStartSetup for "newest-cni-330758" (driver="docker")
	I1108 10:19:41.812094  499131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:19:41.812181  499131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:19:41.812221  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:41.851190  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:41.975472  499131 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:19:41.985706  499131 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:19:41.985739  499131 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:19:41.985750  499131 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:19:41.985807  499131 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:19:41.985885  499131 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:19:41.985990  499131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:19:41.996314  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
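
	The provisioning log above ends with minikube writing CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarting CRI-O. As a hedged example (profile name and file path taken from the log; this assumes CRI-O runs as the crio systemd unit inside the node, consistent with the journal section below), the drop-in can be checked from the host with:

	minikube -p newest-cni-330758 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p newest-cni-330758 ssh -- sudo systemctl is-active crio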
	
	
	==> CRI-O <==
	Nov 08 10:19:31 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:30.994327981Z" level=info msg="Created container 8365b7ec347810245a2a628eccc7cfbc758ecccda09c39e0454e126fd745eb70: kube-system/coredns-66bc5c9577-5nhxx/coredns" id=18912a52-19d7-4e90-972a-a85924404fda name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:31 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:31.008402101Z" level=info msg="Starting container: 8365b7ec347810245a2a628eccc7cfbc758ecccda09c39e0454e126fd745eb70" id=99b788ed-5cdf-4e77-b2e9-dd1b93b7eda9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:19:31 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:31.010502428Z" level=info msg="Started container" PID=1750 containerID=8365b7ec347810245a2a628eccc7cfbc758ecccda09c39e0454e126fd745eb70 description=kube-system/coredns-66bc5c9577-5nhxx/coredns id=99b788ed-5cdf-4e77-b2e9-dd1b93b7eda9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d8f8462e8c935c2b5fb56e6e7b2573cf1381071781fecbda8c29e668312e1a4
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.04290742Z" level=info msg="Running pod sandbox: default/busybox/POD" id=da2cda93-0807-4ba3-8946-d7d4b7cf234d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.042977796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.050419221Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3415a2311a0769271b4a9d2a609060729c766b9a0e6193f14022c5a645d90e85 UID:78e08397-121e-44c5-9cc0-d303ab0890eb NetNS:/var/run/netns/3d73c98b-48d4-408a-b480-6def3fbbad61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400028d088}] Aliases:map[]}"
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.050456374Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.06790472Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3415a2311a0769271b4a9d2a609060729c766b9a0e6193f14022c5a645d90e85 UID:78e08397-121e-44c5-9cc0-d303ab0890eb NetNS:/var/run/netns/3d73c98b-48d4-408a-b480-6def3fbbad61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400028d088}] Aliases:map[]}"
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.068260432Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.076685136Z" level=info msg="Ran pod sandbox 3415a2311a0769271b4a9d2a609060729c766b9a0e6193f14022c5a645d90e85 with infra container: default/busybox/POD" id=da2cda93-0807-4ba3-8946-d7d4b7cf234d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.078626505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3242cc88-4e74-4e6a-a87f-bb4e33ba3217 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.078848677Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3242cc88-4e74-4e6a-a87f-bb4e33ba3217 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.078955246Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3242cc88-4e74-4e6a-a87f-bb4e33ba3217 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.081854687Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b807e0e0-a136-4d29-8443-49ed227bfdfa name=/runtime.v1.ImageService/PullImage
	Nov 08 10:19:34 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:34.087866133Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.057427545Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b807e0e0-a136-4d29-8443-49ed227bfdfa name=/runtime.v1.ImageService/PullImage
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.058162288Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=261b6136-7428-42af-a6dd-1b1006bce220 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.060064436Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9844402a-bc2c-4b25-802a-e386059b7de1 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.067330236Z" level=info msg="Creating container: default/busybox/busybox" id=22b51a58-0f84-46ce-939e-48b3171031b9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.067458435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.073314695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.074148804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.109831729Z" level=info msg="Created container c4bb112a4741da74f6d487bd1171e52779c5595dc5b40c2ed6cf9960cdeec7e2: default/busybox/busybox" id=22b51a58-0f84-46ce-939e-48b3171031b9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.11160873Z" level=info msg="Starting container: c4bb112a4741da74f6d487bd1171e52779c5595dc5b40c2ed6cf9960cdeec7e2" id=9177ce53-0553-40ea-9484-6e624ce27398 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:19:36 default-k8s-diff-port-689864 crio[838]: time="2025-11-08T10:19:36.118019753Z" level=info msg="Started container" PID=1808 containerID=c4bb112a4741da74f6d487bd1171e52779c5595dc5b40c2ed6cf9960cdeec7e2 description=default/busybox/busybox id=9177ce53-0553-40ea-9484-6e624ce27398 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3415a2311a0769271b4a9d2a609060729c766b9a0e6193f14022c5a645d90e85
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	c4bb112a4741d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   3415a2311a076       busybox                                                default
	8365b7ec34781       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   4d8f8462e8c93       coredns-66bc5c9577-5nhxx                               kube-system
	668596a67d103       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   60535a740e839       storage-provisioner                                    kube-system
	b8b4569a1bab1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   583baa6aa67f3       kube-proxy-lcscg                                       kube-system
	d5615952e8710       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   8cd735cba9806       kindnet-c98xc                                          kube-system
	8478e49dbf2cc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   fe7e928ae6f5c       kube-apiserver-default-k8s-diff-port-689864            kube-system
	123ffaf4252b0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   20a95e4196397       kube-scheduler-default-k8s-diff-port-689864            kube-system
	226488a01278b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   cf1224bfbbd84       kube-controller-manager-default-k8s-diff-port-689864   kube-system
	b68db9b85b200       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   4f12ece14f21f       etcd-default-k8s-diff-port-689864                      kube-system
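
	The container status table above follows the column layout of a crictl listing. For illustration (profile name from this report; crictl is normally available inside the minikube node image), an equivalent listing could be captured with:

	minikube -p default-k8s-diff-port-689864 ssh -- sudo crictl ps -a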
	
	
	==> coredns [8365b7ec347810245a2a628eccc7cfbc758ecccda09c39e0454e126fd745eb70] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54045 - 11981 "HINFO IN 2874781958245297065.8683379795376884662. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024005036s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-689864
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-689864
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=default-k8s-diff-port-689864
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_18_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:18:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-689864
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:19:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:19:34 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:19:34 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:19:34 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:19:34 +0000   Sat, 08 Nov 2025 10:19:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-689864
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                374121ba-37fd-4356-a88f-beebc6e065b5
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-5nhxx                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-689864                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-c98xc                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-689864             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-689864    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-lcscg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-689864             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 53s   kube-proxy       
	  Normal   Starting                 60s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s   kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s   kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s   kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s   node-controller  Node default-k8s-diff-port-689864 event: Registered Node default-k8s-diff-port-689864 in Controller
	  Normal   NodeReady                13s   kubelet          Node default-k8s-diff-port-689864 status is now: NodeReady
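
	The node description above is standard kubectl output for the default-k8s-diff-port-689864 control-plane node. As an example only (the kubeconfig context is assumed to match the profile name, which is minikube's usual behaviour), it can be regenerated with:

	kubectl --context default-k8s-diff-port-689864 describe node default-k8s-diff-port-689864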
	
	
	==> dmesg <==
	[Nov 8 09:55] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b68db9b85b2005a2e96b3cbfa84a675c8e885f1d7736a4d879e15c13ae45fb7c] <==
	{"level":"warn","ts":"2025-11-08T10:18:39.142478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.161230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.189389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.199248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.222463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.239995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.257181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.273874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.290658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.314919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.324595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.342226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.365994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.382467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.400737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.413739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.431023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.448742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.474471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.521667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.545383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.568249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.587804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.602514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:18:39.668056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45050","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:19:44 up  3:02,  0 user,  load average: 3.18, 3.68, 2.88
	Linux default-k8s-diff-port-689864 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5615952e8710e72eef1fe5ceeea61b5103a2cee935baa7121740e8031f9f6bd] <==
	I1108 10:18:49.826237       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:18:49.827192       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:18:49.827383       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:18:49.827425       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:18:49.827466       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:18:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:18:50.114784       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:18:50.114819       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:18:50.114830       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:18:50.115150       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:19:20.114839       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:19:20.114996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:19:20.115083       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:19:20.115783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:19:21.715785       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:19:21.715884       1 metrics.go:72] Registering metrics
	I1108 10:19:21.715974       1 controller.go:711] "Syncing nftables rules"
	I1108 10:19:30.121431       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:19:30.121479       1 main.go:301] handling current node
	I1108 10:19:40.115406       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:19:40.115440       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8478e49dbf2cc621bb68a606a63d3d29f14ef619566c717486e67f005fb352c8] <==
	I1108 10:18:40.585717       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1108 10:18:40.591986       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:18:40.602545       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:18:40.602769       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:18:40.605958       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:18:40.628163       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:18:40.629866       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:18:41.283115       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:18:41.288634       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:18:41.288656       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:18:42.029961       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:18:42.091773       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:18:42.203112       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:18:42.219506       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1108 10:18:42.221063       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:18:42.232256       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:18:42.470393       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:18:43.261401       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:18:43.283090       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:18:43.293628       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:18:47.774021       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:18:47.778940       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:18:48.319076       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:18:48.643356       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1108 10:19:41.925431       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:52368: use of closed network connection
	
	
	==> kube-controller-manager [226488a01278b0955c502bf6f78b057918afd1513cb28adcafa258f49e3a6ac2] <==
	I1108 10:18:47.492459       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:18:47.495891       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:18:47.507539       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:18:47.512364       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 10:18:47.512491       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:18:47.513477       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:18:47.513581       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:18:47.513722       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:18:47.513764       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:18:47.513871       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:18:47.513943       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:18:47.514090       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:18:47.514074       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:18:47.514204       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:18:47.514304       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:18:47.514428       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:18:47.515061       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:18:47.516184       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:18:47.516248       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:18:47.528545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:18:47.528631       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:18:47.528661       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:18:47.563372       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:18:47.563747       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:19:32.488518       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b8b4569a1bab13321225b351c0ce317e159ddc2002ca71e8caeba19e51823016] <==
	I1108 10:18:49.800795       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:18:49.905029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:18:50.007921       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:18:50.008113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:18:50.008247       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:18:50.041802       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:18:50.041856       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:18:50.046762       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:18:50.047099       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:18:50.047124       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:18:50.048271       1 config.go:200] "Starting service config controller"
	I1108 10:18:50.048292       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:18:50.052082       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:18:50.052165       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:18:50.052194       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:18:50.052199       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:18:50.052826       1 config.go:309] "Starting node config controller"
	I1108 10:18:50.052843       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:18:50.052852       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:18:50.148450       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:18:50.152729       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:18:50.152775       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [123ffaf4252b0b3c0e01a623f1c5530c8584366d9ce986aea0a8299b2b72d588] <==
	E1108 10:18:40.539559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:18:40.539748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:18:40.539856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:18:40.540393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:18:40.540470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:18:40.540535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:18:40.540591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:18:40.546144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:18:40.546335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:18:40.546451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:18:40.546580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:18:40.546673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:18:40.546791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:18:40.546793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:18:40.546846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:18:41.358597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:18:41.370055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:18:41.450223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:18:41.481687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:18:41.489114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:18:41.500555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:18:41.583615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:18:41.737197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:18:41.861068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:18:44.206258       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:18:44 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:44.477710    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-689864" podStartSLOduration=1.4776920119999999 podStartE2EDuration="1.477692012s" podCreationTimestamp="2025-11-08 10:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:18:44.396059247 +0000 UTC m=+1.266615326" watchObservedRunningTime="2025-11-08 10:18:44.477692012 +0000 UTC m=+1.348248091"
	Nov 08 10:18:44 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:44.521751    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-689864" podStartSLOduration=3.521732719 podStartE2EDuration="3.521732719s" podCreationTimestamp="2025-11-08 10:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:18:44.479189199 +0000 UTC m=+1.349745286" watchObservedRunningTime="2025-11-08 10:18:44.521732719 +0000 UTC m=+1.392288798"
	Nov 08 10:18:47 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:47.537552    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 10:18:47 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:47.538753    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.049585    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/adc3d88d-8c83-4dab-958c-42c33e6f43f3-cni-cfg\") pod \"kindnet-c98xc\" (UID: \"adc3d88d-8c83-4dab-958c-42c33e6f43f3\") " pod="kube-system/kindnet-c98xc"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.052016    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/096de2a8-f856-4f6c-ac17-c3e8f292ac77-kube-proxy\") pod \"kube-proxy-lcscg\" (UID: \"096de2a8-f856-4f6c-ac17-c3e8f292ac77\") " pod="kube-system/kube-proxy-lcscg"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.052124    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adc3d88d-8c83-4dab-958c-42c33e6f43f3-lib-modules\") pod \"kindnet-c98xc\" (UID: \"adc3d88d-8c83-4dab-958c-42c33e6f43f3\") " pod="kube-system/kindnet-c98xc"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.052173    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/096de2a8-f856-4f6c-ac17-c3e8f292ac77-xtables-lock\") pod \"kube-proxy-lcscg\" (UID: \"096de2a8-f856-4f6c-ac17-c3e8f292ac77\") " pod="kube-system/kube-proxy-lcscg"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.052195    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fg6q\" (UniqueName: \"kubernetes.io/projected/096de2a8-f856-4f6c-ac17-c3e8f292ac77-kube-api-access-5fg6q\") pod \"kube-proxy-lcscg\" (UID: \"096de2a8-f856-4f6c-ac17-c3e8f292ac77\") " pod="kube-system/kube-proxy-lcscg"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.052246    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adc3d88d-8c83-4dab-958c-42c33e6f43f3-xtables-lock\") pod \"kindnet-c98xc\" (UID: \"adc3d88d-8c83-4dab-958c-42c33e6f43f3\") " pod="kube-system/kindnet-c98xc"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.052270    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/096de2a8-f856-4f6c-ac17-c3e8f292ac77-lib-modules\") pod \"kube-proxy-lcscg\" (UID: \"096de2a8-f856-4f6c-ac17-c3e8f292ac77\") " pod="kube-system/kube-proxy-lcscg"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.052313    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhrw\" (UniqueName: \"kubernetes.io/projected/adc3d88d-8c83-4dab-958c-42c33e6f43f3-kube-api-access-jnhrw\") pod \"kindnet-c98xc\" (UID: \"adc3d88d-8c83-4dab-958c-42c33e6f43f3\") " pod="kube-system/kindnet-c98xc"
	Nov 08 10:18:49 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:49.197807    1327 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:18:50 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:50.459842    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-c98xc" podStartSLOduration=2.459823557 podStartE2EDuration="2.459823557s" podCreationTimestamp="2025-11-08 10:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:18:50.431475226 +0000 UTC m=+7.302031313" watchObservedRunningTime="2025-11-08 10:18:50.459823557 +0000 UTC m=+7.330379652"
	Nov 08 10:18:53 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:18:53.357471    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lcscg" podStartSLOduration=5.357453412 podStartE2EDuration="5.357453412s" podCreationTimestamp="2025-11-08 10:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:18:50.462656995 +0000 UTC m=+7.333213082" watchObservedRunningTime="2025-11-08 10:18:53.357453412 +0000 UTC m=+10.228009499"
	Nov 08 10:19:30 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:19:30.483405    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 10:19:30 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:19:30.558796    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae48e4e7-48a3-4cc4-be6f-1102abd83f25-config-volume\") pod \"coredns-66bc5c9577-5nhxx\" (UID: \"ae48e4e7-48a3-4cc4-be6f-1102abd83f25\") " pod="kube-system/coredns-66bc5c9577-5nhxx"
	Nov 08 10:19:30 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:19:30.559034    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6msb\" (UniqueName: \"kubernetes.io/projected/ae48e4e7-48a3-4cc4-be6f-1102abd83f25-kube-api-access-n6msb\") pod \"coredns-66bc5c9577-5nhxx\" (UID: \"ae48e4e7-48a3-4cc4-be6f-1102abd83f25\") " pod="kube-system/coredns-66bc5c9577-5nhxx"
	Nov 08 10:19:30 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:19:30.559127    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvs4f\" (UniqueName: \"kubernetes.io/projected/5a04d7b1-40e4-474f-acab-716d8e5e70de-kube-api-access-cvs4f\") pod \"storage-provisioner\" (UID: \"5a04d7b1-40e4-474f-acab-716d8e5e70de\") " pod="kube-system/storage-provisioner"
	Nov 08 10:19:30 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:19:30.559245    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5a04d7b1-40e4-474f-acab-716d8e5e70de-tmp\") pod \"storage-provisioner\" (UID: \"5a04d7b1-40e4-474f-acab-716d8e5e70de\") " pod="kube-system/storage-provisioner"
	Nov 08 10:19:30 default-k8s-diff-port-689864 kubelet[1327]: W1108 10:19:30.896779    1327 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/crio-4d8f8462e8c935c2b5fb56e6e7b2573cf1381071781fecbda8c29e668312e1a4 WatchSource:0}: Error finding container 4d8f8462e8c935c2b5fb56e6e7b2573cf1381071781fecbda8c29e668312e1a4: Status 404 returned error can't find the container with id 4d8f8462e8c935c2b5fb56e6e7b2573cf1381071781fecbda8c29e668312e1a4
	Nov 08 10:19:31 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:19:31.553324    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.55330395 podStartE2EDuration="42.55330395s" podCreationTimestamp="2025-11-08 10:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:19:31.550905781 +0000 UTC m=+48.421461868" watchObservedRunningTime="2025-11-08 10:19:31.55330395 +0000 UTC m=+48.423860037"
	Nov 08 10:19:31 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:19:31.553486    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5nhxx" podStartSLOduration=43.553479484 podStartE2EDuration="43.553479484s" podCreationTimestamp="2025-11-08 10:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:19:31.52904278 +0000 UTC m=+48.399598858" watchObservedRunningTime="2025-11-08 10:19:31.553479484 +0000 UTC m=+48.424035571"
	Nov 08 10:19:33 default-k8s-diff-port-689864 kubelet[1327]: I1108 10:19:33.798670    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc2n7\" (UniqueName: \"kubernetes.io/projected/78e08397-121e-44c5-9cc0-d303ab0890eb-kube-api-access-sc2n7\") pod \"busybox\" (UID: \"78e08397-121e-44c5-9cc0-d303ab0890eb\") " pod="default/busybox"
	Nov 08 10:19:34 default-k8s-diff-port-689864 kubelet[1327]: W1108 10:19:34.074853    1327 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/crio-3415a2311a0769271b4a9d2a609060729c766b9a0e6193f14022c5a645d90e85 WatchSource:0}: Error finding container 3415a2311a0769271b4a9d2a609060729c766b9a0e6193f14022c5a645d90e85: Status 404 returned error can't find the container with id 3415a2311a0769271b4a9d2a609060729c766b9a0e6193f14022c5a645d90e85
	
	
	==> storage-provisioner [668596a67d1031f972b6f0b0e2c0608d11a40e5c2ebeda706a45cc922b1b2d62] <==
	I1108 10:19:31.116257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:19:31.167686       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:19:31.167927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:19:31.172522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:31.188338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:19:31.188505       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:19:31.188682       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-689864_19bbadfe-b91d-4e9d-9b68-93bf2a24f4dc!
	I1108 10:19:31.189665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ef98997-9490-4868-b14f-87f19e537ac2", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-689864_19bbadfe-b91d-4e9d-9b68-93bf2a24f4dc became leader
	W1108 10:19:31.203417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:31.235896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:19:31.293296       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-689864_19bbadfe-b91d-4e9d-9b68-93bf2a24f4dc!
	W1108 10:19:33.238960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:33.245655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:35.259472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:35.268027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:37.271212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:37.278887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:39.281734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:39.286236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:41.291169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:41.297960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:43.303078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:19:43.315032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
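Two patterns in the dump above are worth a quick manual check: storage-provisioner still takes its leader-election lock on a v1 Endpoints object (k8s.io-minikube-hostpath), which is what produces the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings, and kindnet initially timed out against the apiserver Service IP (dial tcp 10.96.0.1:443: i/o timeout) before its caches synced at 10:19:21. A minimal sketch of the equivalent manual checks, assuming only the context/profile name taken from the logs above (the commands themselves are standard kubectl usage):

	# inspect the legacy Endpoints lock named in the storage-provisioner log
	kubectl --context default-k8s-diff-port-689864 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# list Lease objects, the non-deprecated lock type, for comparison
	kubectl --context default-k8s-diff-port-689864 -n kube-system get leases
	# confirm the kubernetes Service ClusterIP that kindnet was timing out against
	kubectl --context default-k8s-diff-port-689864 get svc kubernetes -o wide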
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-689864 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.20s)
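To iterate on just this sub-test outside CI, the usual Go sub-test selector applies. This is a hedged sketch only: the integration harness also expects environment-specific flags (driver, container runtime, minikube binary path) that are not shown in this report and are omitted here:

	# run only the failing sub-test; additional harness flags omitted
	go test ./test/integration -run 'TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive' -v -timeout 30m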

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-330758 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-330758 --alsologtostderr -v=1: exit status 80 (2.05610868s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-330758 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:19:53.811300  501514 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:19:53.811453  501514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:19:53.811459  501514 out.go:374] Setting ErrFile to fd 2...
	I1108 10:19:53.811465  501514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:19:53.811743  501514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:19:53.811984  501514 out.go:368] Setting JSON to false
	I1108 10:19:53.812010  501514 mustload.go:66] Loading cluster: newest-cni-330758
	I1108 10:19:53.812450  501514 config.go:182] Loaded profile config "newest-cni-330758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:53.812960  501514 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:53.838019  501514 host.go:66] Checking if "newest-cni-330758" exists ...
	I1108 10:19:53.838358  501514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:19:53.951543  501514 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-08 10:19:53.939213092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:19:53.952194  501514 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-330758 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:19:53.955515  501514 out.go:179] * Pausing node newest-cni-330758 ... 
	I1108 10:19:53.958431  501514 host.go:66] Checking if "newest-cni-330758" exists ...
	I1108 10:19:53.958775  501514 ssh_runner.go:195] Run: systemctl --version
	I1108 10:19:53.958826  501514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:53.985533  501514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:54.100094  501514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:19:54.116041  501514 pause.go:52] kubelet running: true
	I1108 10:19:54.116105  501514 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:19:54.400870  501514 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:19:54.401078  501514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:19:54.478740  501514 cri.go:89] found id: "271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1"
	I1108 10:19:54.478760  501514 cri.go:89] found id: "d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737"
	I1108 10:19:54.478771  501514 cri.go:89] found id: "442b8ee7bc4dc3da6db96f32052f46bec841c611c0de85bc222a8cba925c1a7b"
	I1108 10:19:54.478774  501514 cri.go:89] found id: "e54a15ae8d0d9494e154823af5fd404fa17a2360d71be54101b78e495105cdde"
	I1108 10:19:54.478778  501514 cri.go:89] found id: "a4286dc9f44f82ab88acd7e9c19cfbf20912d66b27a2c44fbc921f01a0a88a78"
	I1108 10:19:54.478781  501514 cri.go:89] found id: "373eb15cdaeec71fc8c259392860af8991d890a3ddae1247397a32b21bd3f13a"
	I1108 10:19:54.478784  501514 cri.go:89] found id: ""
	I1108 10:19:54.478840  501514 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:19:54.494864  501514 retry.go:31] will retry after 304.764446ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:19:54Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:19:54.800477  501514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:19:54.813883  501514 pause.go:52] kubelet running: false
	I1108 10:19:54.813998  501514 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:19:54.954889  501514 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:19:54.955010  501514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:19:55.056962  501514 cri.go:89] found id: "271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1"
	I1108 10:19:55.056984  501514 cri.go:89] found id: "d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737"
	I1108 10:19:55.056989  501514 cri.go:89] found id: "442b8ee7bc4dc3da6db96f32052f46bec841c611c0de85bc222a8cba925c1a7b"
	I1108 10:19:55.056992  501514 cri.go:89] found id: "e54a15ae8d0d9494e154823af5fd404fa17a2360d71be54101b78e495105cdde"
	I1108 10:19:55.056995  501514 cri.go:89] found id: "a4286dc9f44f82ab88acd7e9c19cfbf20912d66b27a2c44fbc921f01a0a88a78"
	I1108 10:19:55.056999  501514 cri.go:89] found id: "373eb15cdaeec71fc8c259392860af8991d890a3ddae1247397a32b21bd3f13a"
	I1108 10:19:55.057002  501514 cri.go:89] found id: ""
	I1108 10:19:55.057055  501514 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:19:55.071360  501514 retry.go:31] will retry after 458.064825ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:19:55Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:19:55.529704  501514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:19:55.543036  501514 pause.go:52] kubelet running: false
	I1108 10:19:55.543111  501514 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:19:55.678395  501514 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:19:55.678495  501514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:19:55.746526  501514 cri.go:89] found id: "271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1"
	I1108 10:19:55.746555  501514 cri.go:89] found id: "d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737"
	I1108 10:19:55.746571  501514 cri.go:89] found id: "442b8ee7bc4dc3da6db96f32052f46bec841c611c0de85bc222a8cba925c1a7b"
	I1108 10:19:55.746577  501514 cri.go:89] found id: "e54a15ae8d0d9494e154823af5fd404fa17a2360d71be54101b78e495105cdde"
	I1108 10:19:55.746589  501514 cri.go:89] found id: "a4286dc9f44f82ab88acd7e9c19cfbf20912d66b27a2c44fbc921f01a0a88a78"
	I1108 10:19:55.746595  501514 cri.go:89] found id: "373eb15cdaeec71fc8c259392860af8991d890a3ddae1247397a32b21bd3f13a"
	I1108 10:19:55.746598  501514 cri.go:89] found id: ""
	I1108 10:19:55.746649  501514 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:19:55.761733  501514 out.go:203] 
	W1108 10:19:55.764667  501514 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:19:55.764698  501514 out.go:285] * 
	* 
	W1108 10:19:55.771715  501514 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:19:55.774676  501514 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-330758 --alsologtostderr -v=1 failed: exit status 80
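The pause got far enough to see running containers through CRI-O (the cri.go lines above list six container IDs on each attempt) but then failed on "sudo runc list -f json" with "open /run/runc: no such file or directory". One plausible reading, offered as an assumption rather than a confirmed diagnosis, is that this CRI-O profile drives containers through an OCI runtime whose state directory is not /run/runc (for example crun, which keeps its state under /run/crun), so a bare runc listing has nothing to read. A diagnostic sketch assuming only the profile name from the logs above; the /run/crun path and the /etc/crio config location are assumptions:

	# list containers through CRI-O itself (the crictl probe above already succeeded this way)
	out/minikube-linux-arm64 -p newest-cni-330758 ssh -- sudo crictl ps
	# see which OCI runtime state directories actually exist on the node
	out/minikube-linux-arm64 -p newest-cni-330758 ssh -- ls -ld /run/runc /run/crun
	# check which runtime and runtime_root CRI-O is configured with
	out/minikube-linux-arm64 -p newest-cni-330758 ssh -- sudo grep -rE 'default_runtime|runtime_root' /etc/crio/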
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-330758
helpers_test.go:243: (dbg) docker inspect newest-cni-330758:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55",
	        "Created": "2025-11-08T10:19:03.891974469Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 499261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:19:37.280270836Z",
	            "FinishedAt": "2025-11-08T10:19:36.355003297Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/hosts",
	        "LogPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55-json.log",
	        "Name": "/newest-cni-330758",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-330758:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-330758",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55",
	                "LowerDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-330758",
	                "Source": "/var/lib/docker/volumes/newest-cni-330758/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-330758",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-330758",
	                "name.minikube.sigs.k8s.io": "newest-cni-330758",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00e00c49cb86f063123155fcafd4906279dfa3051d7e1dad87d3dfd40cf6fb2f",
	            "SandboxKey": "/var/run/docker/netns/00e00c49cb86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-330758": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:29:92:3d:b7:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "944292dd69993087fe1b211f0e5fa77d84eca9279fd41eb0187ac090cde431bf",
	                    "EndpointID": "0ac832142fd8058cd1277565d67a876079d1fff148674a5ef1f40b878a4baf01",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-330758",
	                        "7ffe9198584b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-330758 -n newest-cni-330758
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-330758 -n newest-cni-330758: exit status 2 (350.350864ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-330758 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-330758 logs -n 25: (1.084500305s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ stop    │ -p embed-certs-606645 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:18 UTC │
	│ image   │ no-preload-872727 image list --format=json                                                                                                                                                                                                    │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p disable-driver-mounts-708013                                                                                                                                                                                                               │ disable-driver-mounts-708013 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ embed-certs-606645 image list --format=json                                                                                                                                                                                                   │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-606645 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p newest-cni-330758 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-330758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-689864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-689864 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ image   │ newest-cni-330758 image list --format=json                                                                                                                                                                                                    │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ pause   │ -p newest-cni-330758 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:19:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:19:36.998130  499131 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:19:36.998320  499131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:19:36.998352  499131 out.go:374] Setting ErrFile to fd 2...
	I1108 10:19:36.998373  499131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:19:36.998654  499131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:19:36.999080  499131 out.go:368] Setting JSON to false
	I1108 10:19:37.000213  499131 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10926,"bootTime":1762586251,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:19:37.000352  499131 start.go:143] virtualization:  
	I1108 10:19:37.004269  499131 out.go:179] * [newest-cni-330758] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:19:37.008250  499131 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:19:37.008343  499131 notify.go:221] Checking for updates...
	I1108 10:19:37.014443  499131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:19:37.017715  499131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:19:37.020980  499131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:19:37.024259  499131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:19:37.027365  499131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:19:37.030792  499131 config.go:182] Loaded profile config "newest-cni-330758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:37.031385  499131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:19:37.059300  499131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:19:37.059419  499131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:19:37.122642  499131 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:19:37.112943602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:19:37.122750  499131 docker.go:319] overlay module found
	I1108 10:19:37.126169  499131 out.go:179] * Using the docker driver based on existing profile
	I1108 10:19:37.129016  499131 start.go:309] selected driver: docker
	I1108 10:19:37.129036  499131 start.go:930] validating driver "docker" against &{Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:19:37.129148  499131 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:19:37.129853  499131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:19:37.186429  499131 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:19:37.176179061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:19:37.186758  499131 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:19:37.186794  499131 cni.go:84] Creating CNI manager for ""
	I1108 10:19:37.186850  499131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:19:37.186891  499131 start.go:353] cluster config:
	{Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:19:37.191917  499131 out.go:179] * Starting "newest-cni-330758" primary control-plane node in "newest-cni-330758" cluster
	I1108 10:19:37.194808  499131 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:19:37.197683  499131 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:19:37.200532  499131 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:19:37.200591  499131 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:19:37.200604  499131 cache.go:59] Caching tarball of preloaded images
	I1108 10:19:37.200641  499131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:19:37.200701  499131 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:19:37.200711  499131 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:19:37.200835  499131 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/config.json ...
	I1108 10:19:37.219994  499131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:19:37.220018  499131 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:19:37.220036  499131 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:19:37.220059  499131 start.go:360] acquireMachinesLock for newest-cni-330758: {Name:mka68247f3ee22af15ad7dc6cf73067d1036d0ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:19:37.220132  499131 start.go:364] duration metric: took 46.048µs to acquireMachinesLock for "newest-cni-330758"
	I1108 10:19:37.220155  499131 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:19:37.220162  499131 fix.go:54] fixHost starting: 
	I1108 10:19:37.220419  499131 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:37.238233  499131 fix.go:112] recreateIfNeeded on newest-cni-330758: state=Stopped err=<nil>
	W1108 10:19:37.238267  499131 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:19:37.241635  499131 out.go:252] * Restarting existing docker container for "newest-cni-330758" ...
	I1108 10:19:37.241727  499131 cli_runner.go:164] Run: docker start newest-cni-330758
	I1108 10:19:37.499830  499131 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:37.519292  499131 kic.go:430] container "newest-cni-330758" state is running.
	I1108 10:19:37.519708  499131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-330758
	I1108 10:19:37.539601  499131 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/config.json ...
	I1108 10:19:37.539967  499131 machine.go:94] provisionDockerMachine start ...
	I1108 10:19:37.540055  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:37.561612  499131 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:37.562003  499131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1108 10:19:37.562022  499131 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:19:37.562675  499131 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:19:40.712831  499131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-330758
	
	I1108 10:19:40.712854  499131 ubuntu.go:182] provisioning hostname "newest-cni-330758"
	I1108 10:19:40.712969  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:40.731890  499131 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:40.732220  499131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1108 10:19:40.732237  499131 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-330758 && echo "newest-cni-330758" | sudo tee /etc/hostname
	I1108 10:19:40.898060  499131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-330758
	
	I1108 10:19:40.898151  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:40.915283  499131 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:40.915597  499131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1108 10:19:40.915619  499131 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-330758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-330758/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-330758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:19:41.066019  499131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:19:41.066056  499131 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:19:41.066080  499131 ubuntu.go:190] setting up certificates
	I1108 10:19:41.066091  499131 provision.go:84] configureAuth start
	I1108 10:19:41.066154  499131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-330758
	I1108 10:19:41.083513  499131 provision.go:143] copyHostCerts
	I1108 10:19:41.083586  499131 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:19:41.083600  499131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:19:41.083681  499131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:19:41.083792  499131 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:19:41.083802  499131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:19:41.083830  499131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:19:41.083902  499131 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:19:41.083912  499131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:19:41.083936  499131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:19:41.084043  499131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.newest-cni-330758 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-330758]
	I1108 10:19:41.293780  499131 provision.go:177] copyRemoteCerts
	I1108 10:19:41.293875  499131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:19:41.293937  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:41.312070  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:41.420665  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:19:41.437797  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:19:41.456610  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:19:41.476538  499131 provision.go:87] duration metric: took 410.41973ms to configureAuth
	I1108 10:19:41.476565  499131 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:19:41.476771  499131 config.go:182] Loaded profile config "newest-cni-330758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:41.476896  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:41.494178  499131 main.go:143] libmachine: Using SSH client type: native
	I1108 10:19:41.494622  499131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1108 10:19:41.494644  499131 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:19:41.812050  499131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:19:41.812071  499131 machine.go:97] duration metric: took 4.27209174s to provisionDockerMachine
	I1108 10:19:41.812082  499131 start.go:293] postStartSetup for "newest-cni-330758" (driver="docker")
	I1108 10:19:41.812094  499131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:19:41.812181  499131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:19:41.812221  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:41.851190  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:41.975472  499131 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:19:41.985706  499131 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:19:41.985739  499131 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:19:41.985750  499131 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:19:41.985807  499131 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:19:41.985885  499131 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:19:41.985990  499131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:19:41.996314  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:19:42.025601  499131 start.go:296] duration metric: took 213.501956ms for postStartSetup
	I1108 10:19:42.025782  499131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:19:42.025831  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:42.058651  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:42.167117  499131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:19:42.174773  499131 fix.go:56] duration metric: took 4.954600409s for fixHost
	I1108 10:19:42.174818  499131 start.go:83] releasing machines lock for "newest-cni-330758", held for 4.954672557s
	I1108 10:19:42.174941  499131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-330758
	I1108 10:19:42.200533  499131 ssh_runner.go:195] Run: cat /version.json
	I1108 10:19:42.200600  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:42.200630  499131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:19:42.200692  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:42.245169  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:42.247314  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:42.479836  499131 ssh_runner.go:195] Run: systemctl --version
	I1108 10:19:42.489971  499131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:19:42.562551  499131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:19:42.569413  499131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:19:42.569488  499131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:19:42.580411  499131 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:19:42.580433  499131 start.go:496] detecting cgroup driver to use...
	I1108 10:19:42.580463  499131 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:19:42.580512  499131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:19:42.597508  499131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:19:42.612795  499131 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:19:42.612856  499131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:19:42.629578  499131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:19:42.643649  499131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:19:42.804864  499131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:19:42.990749  499131 docker.go:234] disabling docker service ...
	I1108 10:19:42.990838  499131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:19:43.014226  499131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:19:43.031242  499131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:19:43.186839  499131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:19:43.361541  499131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:19:43.378231  499131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:19:43.399573  499131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:19:43.399646  499131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:43.415864  499131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:19:43.415929  499131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:43.425952  499131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:43.435075  499131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:43.443930  499131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:19:43.451906  499131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:43.462169  499131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:43.474581  499131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:19:43.483724  499131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:19:43.492754  499131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:19:43.502149  499131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:19:43.668528  499131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:19:43.848212  499131 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:19:43.848307  499131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:19:43.855869  499131 start.go:564] Will wait 60s for crictl version
	I1108 10:19:43.855943  499131 ssh_runner.go:195] Run: which crictl
	I1108 10:19:43.861845  499131 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:19:43.899409  499131 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:19:43.899489  499131 ssh_runner.go:195] Run: crio --version
	I1108 10:19:43.937028  499131 ssh_runner.go:195] Run: crio --version
	I1108 10:19:43.974715  499131 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:19:43.977767  499131 cli_runner.go:164] Run: docker network inspect newest-cni-330758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:19:44.008241  499131 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:19:44.013121  499131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:19:44.029429  499131 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 10:19:44.032312  499131 kubeadm.go:884] updating cluster {Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:19:44.032469  499131 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:19:44.032546  499131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:19:44.082713  499131 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:19:44.082804  499131 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:19:44.082932  499131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:19:44.129939  499131 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:19:44.129959  499131 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:19:44.129967  499131 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:19:44.130061  499131 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-330758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:19:44.130155  499131 ssh_runner.go:195] Run: crio config
	I1108 10:19:44.211998  499131 cni.go:84] Creating CNI manager for ""
	I1108 10:19:44.212062  499131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:19:44.212101  499131 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 10:19:44.212154  499131 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-330758 NodeName:newest-cni-330758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:19:44.212318  499131 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-330758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
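The kubeadm/kubelet/kube-proxy configuration rendered above is staged onto the node before the control plane is restarted. A minimal sketch of inspecting what actually landed on the node, using only paths that appear in the transfer and diff steps of this run (entered via `minikube -p newest-cni-330758 ssh`, which is an assumption about how you reach the node, not part of the test itself):
	# run inside the node
	sudo cat /var/tmp/minikube/kubeadm.yaml.new                                        # kubeadm config staged by minikube
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf                     # kubelet drop-in carrying the ExecStart flags shown above
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new     # the same diff minikube runs to decide whether reconfiguration is needed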
	
	I1108 10:19:44.212414  499131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:19:44.225370  499131 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:19:44.225514  499131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:19:44.236857  499131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:19:44.258725  499131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:19:44.277849  499131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1108 10:19:44.292670  499131 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:19:44.297506  499131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:19:44.308337  499131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:19:44.449407  499131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:19:44.471372  499131 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758 for IP: 192.168.76.2
	I1108 10:19:44.471443  499131 certs.go:195] generating shared ca certs ...
	I1108 10:19:44.471473  499131 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:44.471651  499131 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:19:44.471727  499131 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:19:44.471752  499131 certs.go:257] generating profile certs ...
	I1108 10:19:44.471865  499131 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/client.key
	I1108 10:19:44.471966  499131 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.key.8c8c918e
	I1108 10:19:44.472031  499131 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.key
	I1108 10:19:44.472152  499131 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:19:44.472201  499131 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:19:44.472225  499131 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:19:44.472267  499131 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:19:44.472315  499131 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:19:44.472357  499131 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:19:44.472429  499131 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:19:44.492255  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:19:44.547714  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:19:44.602570  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:19:44.633415  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:19:44.677623  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:19:44.739340  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 10:19:44.771577  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:19:44.795587  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/newest-cni-330758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:19:44.814417  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:19:44.838838  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:19:44.863259  499131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:19:44.890218  499131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:19:44.904266  499131 ssh_runner.go:195] Run: openssl version
	I1108 10:19:44.919851  499131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:19:44.929252  499131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:19:44.934388  499131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:19:44.934451  499131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:19:44.990654  499131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:19:44.999764  499131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:19:45.027525  499131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:19:45.032700  499131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:19:45.032812  499131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:19:45.089531  499131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:19:45.104214  499131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:19:45.118831  499131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:19:45.128135  499131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:19:45.128218  499131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:19:45.178542  499131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:19:45.190360  499131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:19:45.196389  499131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:19:45.277514  499131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:19:45.446552  499131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:19:45.581308  499131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:19:45.726564  499131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:19:45.814678  499131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
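The openssl runs above are minikube verifying that each control-plane certificate remains valid for at least the next 24 hours. A minimal sketch of performing the same check by hand, mirroring the commands logged above (the `-enddate` line is an added convenience, not something this run executed):
	# -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24h)
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "apiserver-kubelet-client.crt valid for >24h"
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/etcd/server.crt      # print the actual expiry timestamp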
	I1108 10:19:45.972284  499131 kubeadm.go:401] StartCluster: {Name:newest-cni-330758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-330758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:19:45.972374  499131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:19:45.972446  499131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:19:46.016290  499131 cri.go:89] found id: "442b8ee7bc4dc3da6db96f32052f46bec841c611c0de85bc222a8cba925c1a7b"
	I1108 10:19:46.016309  499131 cri.go:89] found id: "e54a15ae8d0d9494e154823af5fd404fa17a2360d71be54101b78e495105cdde"
	I1108 10:19:46.016314  499131 cri.go:89] found id: "a4286dc9f44f82ab88acd7e9c19cfbf20912d66b27a2c44fbc921f01a0a88a78"
	I1108 10:19:46.016317  499131 cri.go:89] found id: "373eb15cdaeec71fc8c259392860af8991d890a3ddae1247397a32b21bd3f13a"
	I1108 10:19:46.016321  499131 cri.go:89] found id: ""
	I1108 10:19:46.016377  499131 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:19:46.029011  499131 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:19:46Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:19:46.029099  499131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:19:46.055882  499131 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:19:46.055949  499131 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:19:46.056018  499131 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:19:46.069388  499131 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:19:46.069997  499131 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-330758" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:19:46.070295  499131 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-292236/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-330758" cluster setting kubeconfig missing "newest-cni-330758" context setting]
	I1108 10:19:46.070772  499131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:46.072525  499131 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:19:46.084100  499131 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:19:46.084173  499131 kubeadm.go:602] duration metric: took 28.203331ms to restartPrimaryControlPlane
	I1108 10:19:46.084205  499131 kubeadm.go:403] duration metric: took 111.932381ms to StartCluster
	I1108 10:19:46.084236  499131 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:46.084310  499131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:19:46.085244  499131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:19:46.085510  499131 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:19:46.085916  499131 config.go:182] Loaded profile config "newest-cni-330758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:46.085970  499131 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:19:46.086126  499131 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-330758"
	I1108 10:19:46.086144  499131 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-330758"
	W1108 10:19:46.086151  499131 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:19:46.086176  499131 host.go:66] Checking if "newest-cni-330758" exists ...
	I1108 10:19:46.086226  499131 addons.go:70] Setting dashboard=true in profile "newest-cni-330758"
	I1108 10:19:46.086260  499131 addons.go:239] Setting addon dashboard=true in "newest-cni-330758"
	W1108 10:19:46.086282  499131 addons.go:248] addon dashboard should already be in state true
	I1108 10:19:46.086357  499131 host.go:66] Checking if "newest-cni-330758" exists ...
	I1108 10:19:46.086669  499131 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:46.086935  499131 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:46.087407  499131 addons.go:70] Setting default-storageclass=true in profile "newest-cni-330758"
	I1108 10:19:46.087430  499131 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-330758"
	I1108 10:19:46.087795  499131 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:46.090090  499131 out.go:179] * Verifying Kubernetes components...
	I1108 10:19:46.096216  499131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:19:46.137426  499131 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:19:46.141759  499131 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:19:46.144622  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:19:46.144674  499131 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:19:46.144814  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:46.158971  499131 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:19:46.159122  499131 addons.go:239] Setting addon default-storageclass=true in "newest-cni-330758"
	W1108 10:19:46.159136  499131 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:19:46.159162  499131 host.go:66] Checking if "newest-cni-330758" exists ...
	I1108 10:19:46.159617  499131 cli_runner.go:164] Run: docker container inspect newest-cni-330758 --format={{.State.Status}}
	I1108 10:19:46.162842  499131 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:19:46.162861  499131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:19:46.162920  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:46.213023  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:46.222363  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:46.222895  499131 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:19:46.222909  499131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:19:46.222961  499131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-330758
	I1108 10:19:46.262902  499131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/newest-cni-330758/id_rsa Username:docker}
	I1108 10:19:46.426131  499131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:19:46.450605  499131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:19:46.455923  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:19:46.455948  499131 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:19:46.515263  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:19:46.515290  499131 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:19:46.531676  499131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:19:46.619393  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:19:46.619458  499131 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:19:46.703885  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:19:46.703957  499131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:19:46.738159  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:19:46.738211  499131 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:19:46.768293  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:19:46.768317  499131 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:19:46.800995  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:19:46.801021  499131 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:19:46.846168  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:19:46.846212  499131 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:19:46.892365  499131 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:19:46.892387  499131 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:19:46.933965  499131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:19:52.711030  499131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.284862606s)
	I1108 10:19:52.711132  499131 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.260501994s)
	I1108 10:19:52.711365  499131 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:19:52.711446  499131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:19:52.711160  499131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.179462053s)
	I1108 10:19:52.711258  499131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.777266069s)
	I1108 10:19:52.715637  499131 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-330758 addons enable metrics-server
	
	I1108 10:19:52.732380  499131 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 10:19:52.735455  499131 addons.go:515] duration metric: took 6.649473008s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 10:19:52.747175  499131 api_server.go:72] duration metric: took 6.661601215s to wait for apiserver process to appear ...
	I1108 10:19:52.747248  499131 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:19:52.747283  499131 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:19:52.755813  499131 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:19:52.757058  499131 api_server.go:141] control plane version: v1.34.1
	I1108 10:19:52.757114  499131 api_server.go:131] duration metric: took 9.847028ms to wait for apiserver health ...
	I1108 10:19:52.757139  499131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:19:52.761550  499131 system_pods.go:59] 8 kube-system pods found
	I1108 10:19:52.761627  499131 system_pods.go:61] "coredns-66bc5c9577-4zq2p" [148b4a8d-04ba-4b85-ba4c-aa7ff04adeeb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:19:52.761651  499131 system_pods.go:61] "etcd-newest-cni-330758" [b16f4406-54aa-41c8-922d-4d459430fb85] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:19:52.761677  499131 system_pods.go:61] "kindnet-2cmcs" [c14e613a-b33c-4bde-9cd9-0bf775170ccf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:19:52.761701  499131 system_pods.go:61] "kube-apiserver-newest-cni-330758" [67075241-8851-41d1-84f3-8e21d612ad3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:19:52.761726  499131 system_pods.go:61] "kube-controller-manager-newest-cni-330758" [0ba4c4f4-eb49-4069-8a07-04dedb66da92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:19:52.761759  499131 system_pods.go:61] "kube-proxy-hzls4" [c81513fd-e2c2-4e11-a842-c8ae0ceaed28] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:19:52.761785  499131 system_pods.go:61] "kube-scheduler-newest-cni-330758" [4530a83c-b97c-4e17-b43c-e3333e2c0ead] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:19:52.761806  499131 system_pods.go:61] "storage-provisioner" [de7cb71c-4551-4e2b-a71b-9fea74e783e2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:19:52.761838  499131 system_pods.go:74] duration metric: took 4.678437ms to wait for pod list to return data ...
	I1108 10:19:52.761866  499131 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:19:52.764341  499131 default_sa.go:45] found service account: "default"
	I1108 10:19:52.764393  499131 default_sa.go:55] duration metric: took 2.505295ms for default service account to be created ...
	I1108 10:19:52.764419  499131 kubeadm.go:587] duration metric: took 6.678850358s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:19:52.764457  499131 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:19:52.767123  499131 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:19:52.767187  499131 node_conditions.go:123] node cpu capacity is 2
	I1108 10:19:52.767213  499131 node_conditions.go:105] duration metric: took 2.735132ms to run NodePressure ...
	I1108 10:19:52.767239  499131 start.go:242] waiting for startup goroutines ...
	I1108 10:19:52.767273  499131 start.go:247] waiting for cluster config update ...
	I1108 10:19:52.767303  499131 start.go:256] writing updated cluster config ...
	I1108 10:19:52.767617  499131 ssh_runner.go:195] Run: rm -f paused
	I1108 10:19:52.851232  499131 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:19:52.854636  499131 out.go:179] * Done! kubectl is now configured to use "newest-cni-330758" cluster and "default" namespace by default
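After the addons are applied, the log above establishes readiness by polling https://192.168.76.2:8443/healthz and then listing the kube-system pods. A minimal sketch of repeating those checks from the host with the kubeconfig this run just wrote, assuming the context name matches the profile name as minikube configures it:
	kubectl --context newest-cni-330758 get --raw /healthz           # same endpoint minikube polls; prints "ok"
	kubectl --context newest-cni-330758 get pods -n kube-system      # the 8 kube-system pods enumerated above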
	
	
	==> CRI-O <==
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.958755545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.965237692Z" level=info msg="Running pod sandbox: kube-system/kindnet-2cmcs/POD" id=11c5704b-ea52-470b-8af2-37b75d29edd8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.965306222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.972175047Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=11c5704b-ea52-470b-8af2-37b75d29edd8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.979182244Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=77c95be2-8d5f-4854-a60e-ea0cb0a22a36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.987938234Z" level=info msg="Ran pod sandbox 8626285bea4a07468b8ec96fa821f55f03f7288c03cd8ceecfcfff140fd2eb0e with infra container: kube-system/kindnet-2cmcs/POD" id=11c5704b-ea52-470b-8af2-37b75d29edd8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.007476794Z" level=info msg="Ran pod sandbox 755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3 with infra container: kube-system/kube-proxy-hzls4/POD" id=77c95be2-8d5f-4854-a60e-ea0cb0a22a36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.008797094Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=07288f52-d08e-40e7-8e0f-4a9a4b674eb9 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.020974606Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=27320dbd-436d-4a7e-ab33-ca406b56d0d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.021519481Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c0a7bb2c-1054-40fb-b3da-b0f3da3fb2b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.02924735Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b72e5fc1-e887-4120-bd43-a9f0b1b01f54 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.030002738Z" level=info msg="Creating container: kube-system/kindnet-2cmcs/kindnet-cni" id=ba3acc46-9a18-4d40-bdeb-19b6277dbc97 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.030237908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.034860155Z" level=info msg="Creating container: kube-system/kube-proxy-hzls4/kube-proxy" id=15313b94-741c-4e7b-9d1a-929e5369b2c2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.035134349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.045795366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.073717859Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.089763347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.090760912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.124017634Z" level=info msg="Created container d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737: kube-system/kindnet-2cmcs/kindnet-cni" id=ba3acc46-9a18-4d40-bdeb-19b6277dbc97 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.124686031Z" level=info msg="Starting container: d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737" id=a7477d80-00c2-4516-a2a3-ff045a113c33 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.136268977Z" level=info msg="Created container 271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1: kube-system/kube-proxy-hzls4/kube-proxy" id=15313b94-741c-4e7b-9d1a-929e5369b2c2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.142980467Z" level=info msg="Starting container: 271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1" id=5c7c42be-c90c-415a-88c5-09f29471ff2f name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.143455163Z" level=info msg="Started container" PID=1053 containerID=d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737 description=kube-system/kindnet-2cmcs/kindnet-cni id=a7477d80-00c2-4516-a2a3-ff045a113c33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8626285bea4a07468b8ec96fa821f55f03f7288c03cd8ceecfcfff140fd2eb0e
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.147477865Z" level=info msg="Started container" PID=1057 containerID=271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1 description=kube-system/kube-proxy-hzls4/kube-proxy id=5c7c42be-c90c-415a-88c5-09f29471ff2f name=/runtime.v1.RuntimeService/StartContainer sandboxID=755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	271859b9bb8f7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   755a5e9239c69       kube-proxy-hzls4                            kube-system
	d3fe2b6c6eb42       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   8626285bea4a0       kindnet-2cmcs                               kube-system
	442b8ee7bc4dc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   5e2ea6df1a372       kube-apiserver-newest-cni-330758            kube-system
	e54a15ae8d0d9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   9c7608274fed1       kube-controller-manager-newest-cni-330758   kube-system
	a4286dc9f44f8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   76be99502b57f       etcd-newest-cni-330758                      kube-system
	373eb15cdaeec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   a8bd6d108424b       kube-scheduler-newest-cni-330758            kube-system
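The table above is the CRI-O container listing captured at the end of the test. A minimal sketch of collecting the same view on the node (standard crictl flags; the runtime endpoint matches the socket configured in the kubelet arguments earlier, and the container ID is the kube-apiserver entry from the table):
	# run inside the node
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a                  # all containers, including earlier attempts
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs 442b8ee7bc4dc     # logs of the restarted kube-apiserver container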
	
	
	==> describe nodes <==
	Name:               newest-cni-330758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-330758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=newest-cni-330758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_19_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:19:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-330758
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:19:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:19:51 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:19:51 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:19:51 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 10:19:51 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-330758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                5853ff61-bbc9-4baf-94c5-07acd84b90c2
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-330758                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-2cmcs                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-330758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-330758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-hzls4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-330758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node newest-cni-330758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientPID
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     30s                kubelet          Node newest-cni-330758 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-330758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  30s                kubelet          Node newest-cni-330758 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-330758 event: Registered Node newest-cni-330758 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-330758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-330758 event: Registered Node newest-cni-330758 in Controller
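The node description above shows newest-cni-330758 still NotReady with the node.kubernetes.io/not-ready taint because no CNI configuration has been written to /etc/cni/net.d/ yet (see the Ready condition). A minimal sketch of the corresponding manual checks, assuming the kubectl context matches the profile name:
	kubectl --context newest-cni-330758 describe node newest-cni-330758
	kubectl --context newest-cni-330758 get node newest-cni-330758 -o jsonpath='{.spec.taints}{"\n"}'   # shows the not-ready taint while it persists
	ls /etc/cni/net.d/        # run inside the node: empty until kindnet writes its CNI config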
	
	
	==> dmesg <==
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[ +26.370836] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a4286dc9f44f82ab88acd7e9c19cfbf20912d66b27a2c44fbc921f01a0a88a78] <==
	{"level":"warn","ts":"2025-11-08T10:19:49.089392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.111125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.157847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.178968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.206160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.255227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.274664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.301138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.323378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.365622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.409793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.453117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.493463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.519108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.561453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.586580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.607775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.635752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.670783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.685648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.788474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.817293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.850292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.871466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.971102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60122","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:19:56 up  3:02,  0 user,  load average: 4.07, 3.85, 2.95
	Linux newest-cni-330758 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737] <==
	I1108 10:19:52.233878       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:19:52.239159       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:19:52.239293       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:19:52.239312       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:19:52.239328       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:19:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:19:52.443954       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:19:52.443975       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:19:52.443983       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:19:52.444785       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [442b8ee7bc4dc3da6db96f32052f46bec841c611c0de85bc222a8cba925c1a7b] <==
	I1108 10:19:51.376272       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:19:51.376279       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:19:51.376286       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:19:51.384966       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:19:51.385019       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:19:51.387069       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:19:51.392147       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:19:51.392180       1 policy_source.go:240] refreshing policies
	I1108 10:19:51.393655       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:19:51.445034       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:19:51.445064       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:19:51.445364       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:19:51.457939       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1108 10:19:51.477611       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:19:51.718886       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:19:51.881409       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:19:52.206646       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:19:52.304002       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:19:52.382521       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:19:52.407177       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:19:52.581620       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.73.187"}
	I1108 10:19:52.622509       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.30.16"}
	I1108 10:19:54.595605       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:19:55.046509       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:19:55.194775       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e54a15ae8d0d9494e154823af5fd404fa17a2360d71be54101b78e495105cdde] <==
	I1108 10:19:54.612190       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:19:54.619297       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:19:54.619325       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:19:54.619332       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:19:54.621378       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:19:54.623669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:19:54.626666       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:19:54.630380       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:19:54.633070       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:19:54.636426       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:19:54.636451       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:19:54.639442       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:19:54.639960       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:19:54.640005       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:19:54.642416       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:19:54.642543       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:19:54.642586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:19:54.642615       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:19:54.642637       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:19:54.643922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:19:54.643982       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:19:54.644546       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:19:54.652022       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:19:54.659254       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:19:54.665546       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1] <==
	I1108 10:19:52.450726       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:19:52.670554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:19:52.773003       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:19:52.773139       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:19:52.776494       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:19:53.035511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:19:53.035639       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:19:53.058601       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:19:53.059010       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:19:53.059331       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:19:53.060655       1 config.go:200] "Starting service config controller"
	I1108 10:19:53.060759       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:19:53.060807       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:19:53.060856       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:19:53.060903       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:19:53.060965       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:19:53.061853       1 config.go:309] "Starting node config controller"
	I1108 10:19:53.061912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:19:53.061959       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:19:53.161452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:19:53.161495       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:19:53.161532       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [373eb15cdaeec71fc8c259392860af8991d890a3ddae1247397a32b21bd3f13a] <==
	I1108 10:19:51.017763       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:19:54.262174       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:19:54.262280       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:19:54.276828       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:19:54.277047       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:19:54.277840       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:19:54.281241       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:19:54.281953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:19:54.281711       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:19:54.282074       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:19:54.277379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:19:54.377991       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:19:54.383118       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:19:54.383189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:19:46 newest-cni-330758 kubelet[727]: E1108 10:19:46.885807     727 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-330758\" not found" node="newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.060981     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.431792     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.431913     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.431956     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.434537     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: E1108 10:19:51.481814     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-330758\" already exists" pod="kube-system/kube-apiserver-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.481861     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: E1108 10:19:51.495361     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-330758\" already exists" pod="kube-system/kube-controller-manager-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.495413     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: E1108 10:19:51.507687     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-330758\" already exists" pod="kube-system/kube-scheduler-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.507744     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: E1108 10:19:51.526594     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-330758\" already exists" pod="kube-system/etcd-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.644181     727 apiserver.go:52] "Watching apiserver"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.660822     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.673824     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c81513fd-e2c2-4e11-a842-c8ae0ceaed28-lib-modules\") pod \"kube-proxy-hzls4\" (UID: \"c81513fd-e2c2-4e11-a842-c8ae0ceaed28\") " pod="kube-system/kube-proxy-hzls4"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.674019     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-xtables-lock\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.674134     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c81513fd-e2c2-4e11-a842-c8ae0ceaed28-xtables-lock\") pod \"kube-proxy-hzls4\" (UID: \"c81513fd-e2c2-4e11-a842-c8ae0ceaed28\") " pod="kube-system/kube-proxy-hzls4"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.674221     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-cni-cfg\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.674292     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-lib-modules\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.769141     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:19:52 newest-cni-330758 kubelet[727]: W1108 10:19:52.008336     727 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/crio-755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3 WatchSource:0}: Error finding container 755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3: Status 404 returned error can't find the container with id 755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3
	Nov 08 10:19:54 newest-cni-330758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:19:54 newest-cni-330758 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:19:54 newest-cni-330758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-330758 -n newest-cni-330758
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-330758 -n newest-cni-330758: exit status 2 (470.746305ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
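The probe above is an ordinary minikube status call with a Go-template format string, so it can be repeated by hand while the container is still up; a minimal sketch, assuming the out/minikube-linux-arm64 binary and the newest-cni-330758 profile from this run (a non-zero exit here reflects component state rather than a command failure, which is why the harness notes it "may be ok"):

	out/minikube-linux-arm64 status -p newest-cni-330758 -n newest-cni-330758 --format='{{.APIServer}}'
	out/minikube-linux-arm64 status -p newest-cni-330758 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'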
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-330758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64: exit status 1 (118.471693ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4zq2p" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-k5mfh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-75h64" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64: exit status 1
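The sweep for non-running pods is plain kubectl with a phase field selector and can be rerun against the same context; a minimal sketch, assuming the newest-cni-330758 kubeconfig context still exists (<namespace> and <pod-name> are placeholders):

	kubectl --context newest-cni-330758 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{.items[*].metadata.name}'
	kubectl --context newest-cni-330758 -n <namespace> describe pod <pod-name>

Note that describe is namespace-scoped: the harness calls it without -n, so it searches only the default namespace, which is one reason the kube-system and kubernetes-dashboard pods listed by the selector come back as NotFound above.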
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-330758
helpers_test.go:243: (dbg) docker inspect newest-cni-330758:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55",
	        "Created": "2025-11-08T10:19:03.891974469Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 499261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:19:37.280270836Z",
	            "FinishedAt": "2025-11-08T10:19:36.355003297Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/hosts",
	        "LogPath": "/var/lib/docker/containers/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55-json.log",
	        "Name": "/newest-cni-330758",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-330758:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-330758",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55",
	                "LowerDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08986b8d0923606893690cb26005e155350dda06f51ea06e6cbe171ba074ee8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-330758",
	                "Source": "/var/lib/docker/volumes/newest-cni-330758/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-330758",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-330758",
	                "name.minikube.sigs.k8s.io": "newest-cni-330758",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00e00c49cb86f063123155fcafd4906279dfa3051d7e1dad87d3dfd40cf6fb2f",
	            "SandboxKey": "/var/run/docker/netns/00e00c49cb86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-330758": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:29:92:3d:b7:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "944292dd69993087fe1b211f0e5fa77d84eca9279fd41eb0187ac090cde431bf",
	                    "EndpointID": "0ac832142fd8058cd1277565d67a876079d1fff148674a5ef1f40b878a4baf01",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-330758",
	                        "7ffe9198584b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
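Individual fields from the inspect document above can be pulled with docker's --format Go templates instead of reading the full JSON; a minimal sketch against the same container (the template keys mirror the JSON paths shown above):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-330758
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-330758

The second command prints the host port mapped to the API server port (33461 in the snapshot above).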
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-330758 -n newest-cni-330758
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-330758 -n newest-cni-330758: exit status 2 (479.383094ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
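The post-mortem collection that follows is the stock logs subcommand; a minimal sketch, assuming the profile is still present (-n limits how many lines are pulled per component; --file, if available in this minikube version, writes the output to a file instead of stdout):

	out/minikube-linux-arm64 -p newest-cni-330758 logs -n 25
	out/minikube-linux-arm64 -p newest-cni-330758 logs --file=newest-cni-330758-postmortem.log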
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-330758 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-330758 logs -n 25: (1.375181693s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-606645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │                     │
	│ stop    │ -p embed-certs-606645 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:17 UTC │ 08 Nov 25 10:18 UTC │
	│ image   │ no-preload-872727 image list --format=json                                                                                                                                                                                                    │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p disable-driver-mounts-708013                                                                                                                                                                                                               │ disable-driver-mounts-708013 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ embed-certs-606645 image list --format=json                                                                                                                                                                                                   │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-606645 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p newest-cni-330758 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-330758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-689864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-689864 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ newest-cni-330758 image list --format=json                                                                                                                                                                                                    │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ pause   │ -p newest-cni-330758 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-689864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:19:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:19:58.008164  502245 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:19:58.008375  502245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:19:58.008400  502245 out.go:374] Setting ErrFile to fd 2...
	I1108 10:19:58.008421  502245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:19:58.009012  502245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:19:58.009597  502245 out.go:368] Setting JSON to false
	I1108 10:19:58.011068  502245 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10947,"bootTime":1762586251,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:19:58.011155  502245 start.go:143] virtualization:  
	I1108 10:19:58.014145  502245 out.go:179] * [default-k8s-diff-port-689864] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:19:58.018035  502245 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:19:58.018107  502245 notify.go:221] Checking for updates...
	I1108 10:19:58.023966  502245 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:19:58.026999  502245 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:19:58.030012  502245 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:19:58.032997  502245 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:19:58.035930  502245 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:19:58.039299  502245 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:19:58.039870  502245 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:19:58.080693  502245 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:19:58.080807  502245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:19:58.172058  502245 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:19:58.1606267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:19:58.172161  502245 docker.go:319] overlay module found
	I1108 10:19:58.177276  502245 out.go:179] * Using the docker driver based on existing profile
	I1108 10:19:58.180178  502245 start.go:309] selected driver: docker
	I1108 10:19:58.180203  502245 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:19:58.180393  502245 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:19:58.181227  502245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:19:58.268290  502245 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:19:58.258006137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:19:58.268643  502245 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:19:58.268675  502245 cni.go:84] Creating CNI manager for ""
	I1108 10:19:58.268757  502245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:19:58.268805  502245 start.go:353] cluster config:
	{Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:19:58.272130  502245 out.go:179] * Starting "default-k8s-diff-port-689864" primary control-plane node in "default-k8s-diff-port-689864" cluster
	I1108 10:19:58.274960  502245 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:19:58.277837  502245 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:19:58.280644  502245 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:19:58.280714  502245 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:19:58.280726  502245 cache.go:59] Caching tarball of preloaded images
	I1108 10:19:58.280766  502245 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:19:58.280808  502245 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:19:58.280818  502245 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:19:58.280952  502245 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/config.json ...
	I1108 10:19:58.315833  502245 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:19:58.315852  502245 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:19:58.315864  502245 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:19:58.315886  502245 start.go:360] acquireMachinesLock for default-k8s-diff-port-689864: {Name:mk8e02949baf85c4a0d930cca199e546b49684a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:19:58.315941  502245 start.go:364] duration metric: took 33.526µs to acquireMachinesLock for "default-k8s-diff-port-689864"
	I1108 10:19:58.315962  502245 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:19:58.315968  502245 fix.go:54] fixHost starting: 
	I1108 10:19:58.316249  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:19:58.341660  502245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-689864: state=Stopped err=<nil>
	W1108 10:19:58.341694  502245 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.958755545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.965237692Z" level=info msg="Running pod sandbox: kube-system/kindnet-2cmcs/POD" id=11c5704b-ea52-470b-8af2-37b75d29edd8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.965306222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.972175047Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=11c5704b-ea52-470b-8af2-37b75d29edd8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.979182244Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=77c95be2-8d5f-4854-a60e-ea0cb0a22a36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:51 newest-cni-330758 crio[611]: time="2025-11-08T10:19:51.987938234Z" level=info msg="Ran pod sandbox 8626285bea4a07468b8ec96fa821f55f03f7288c03cd8ceecfcfff140fd2eb0e with infra container: kube-system/kindnet-2cmcs/POD" id=11c5704b-ea52-470b-8af2-37b75d29edd8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.007476794Z" level=info msg="Ran pod sandbox 755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3 with infra container: kube-system/kube-proxy-hzls4/POD" id=77c95be2-8d5f-4854-a60e-ea0cb0a22a36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.008797094Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=07288f52-d08e-40e7-8e0f-4a9a4b674eb9 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.020974606Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=27320dbd-436d-4a7e-ab33-ca406b56d0d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.021519481Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c0a7bb2c-1054-40fb-b3da-b0f3da3fb2b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.02924735Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b72e5fc1-e887-4120-bd43-a9f0b1b01f54 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.030002738Z" level=info msg="Creating container: kube-system/kindnet-2cmcs/kindnet-cni" id=ba3acc46-9a18-4d40-bdeb-19b6277dbc97 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.030237908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.034860155Z" level=info msg="Creating container: kube-system/kube-proxy-hzls4/kube-proxy" id=15313b94-741c-4e7b-9d1a-929e5369b2c2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.035134349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.045795366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.073717859Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.089763347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.090760912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.124017634Z" level=info msg="Created container d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737: kube-system/kindnet-2cmcs/kindnet-cni" id=ba3acc46-9a18-4d40-bdeb-19b6277dbc97 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.124686031Z" level=info msg="Starting container: d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737" id=a7477d80-00c2-4516-a2a3-ff045a113c33 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.136268977Z" level=info msg="Created container 271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1: kube-system/kube-proxy-hzls4/kube-proxy" id=15313b94-741c-4e7b-9d1a-929e5369b2c2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.142980467Z" level=info msg="Starting container: 271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1" id=5c7c42be-c90c-415a-88c5-09f29471ff2f name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.143455163Z" level=info msg="Started container" PID=1053 containerID=d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737 description=kube-system/kindnet-2cmcs/kindnet-cni id=a7477d80-00c2-4516-a2a3-ff045a113c33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8626285bea4a07468b8ec96fa821f55f03f7288c03cd8ceecfcfff140fd2eb0e
	Nov 08 10:19:52 newest-cni-330758 crio[611]: time="2025-11-08T10:19:52.147477865Z" level=info msg="Started container" PID=1057 containerID=271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1 description=kube-system/kube-proxy-hzls4/kube-proxy id=5c7c42be-c90c-415a-88c5-09f29471ff2f name=/runtime.v1.RuntimeService/StartContainer sandboxID=755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	271859b9bb8f7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   755a5e9239c69       kube-proxy-hzls4                            kube-system
	d3fe2b6c6eb42       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   8626285bea4a0       kindnet-2cmcs                               kube-system
	442b8ee7bc4dc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   5e2ea6df1a372       kube-apiserver-newest-cni-330758            kube-system
	e54a15ae8d0d9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   9c7608274fed1       kube-controller-manager-newest-cni-330758   kube-system
	a4286dc9f44f8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   76be99502b57f       etcd-newest-cni-330758                      kube-system
	373eb15cdaeec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   a8bd6d108424b       kube-scheduler-newest-cni-330758            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-330758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-330758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=newest-cni-330758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_19_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:19:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-330758
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:19:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:19:51 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:19:51 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:19:51 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 10:19:51 +0000   Sat, 08 Nov 2025 10:19:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-330758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                5853ff61-bbc9-4baf-94c5-07acd84b90c2
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-330758                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-2cmcs                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-330758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-330758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-hzls4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-330758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 41s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 41s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-330758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-330758 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-330758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-330758 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-330758 event: Registered Node newest-cni-330758 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-330758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-330758 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-330758 event: Registered Node newest-cni-330758 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:56] overlayfs: idmapped layers are currently not supported
	[  +9.939804] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[ +26.370836] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a4286dc9f44f82ab88acd7e9c19cfbf20912d66b27a2c44fbc921f01a0a88a78] <==
	{"level":"warn","ts":"2025-11-08T10:19:49.089392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.111125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.157847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.178968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.206160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.255227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.274664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.301138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.323378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.365622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.409793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.453117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.493463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.519108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.561453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.586580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.607775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.635752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.670783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.685648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.788474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.817293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.850292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.871466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:19:49.971102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60122","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:19:59 up  3:02,  0 user,  load average: 4.07, 3.85, 2.95
	Linux newest-cni-330758 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3fe2b6c6eb4246722428ebbe751416f8a60943764e57f2f094720a403d2c737] <==
	I1108 10:19:52.233878       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:19:52.239159       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:19:52.239293       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:19:52.239312       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:19:52.239328       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:19:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:19:52.443954       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:19:52.443975       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:19:52.443983       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:19:52.444785       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [442b8ee7bc4dc3da6db96f32052f46bec841c611c0de85bc222a8cba925c1a7b] <==
	I1108 10:19:51.376272       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:19:51.376279       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:19:51.376286       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:19:51.384966       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:19:51.385019       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:19:51.387069       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:19:51.392147       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:19:51.392180       1 policy_source.go:240] refreshing policies
	I1108 10:19:51.393655       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:19:51.445034       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:19:51.445064       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:19:51.445364       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:19:51.457939       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1108 10:19:51.477611       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:19:51.718886       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:19:51.881409       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:19:52.206646       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:19:52.304002       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:19:52.382521       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:19:52.407177       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:19:52.581620       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.73.187"}
	I1108 10:19:52.622509       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.30.16"}
	I1108 10:19:54.595605       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:19:55.046509       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:19:55.194775       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e54a15ae8d0d9494e154823af5fd404fa17a2360d71be54101b78e495105cdde] <==
	I1108 10:19:54.612190       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:19:54.619297       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:19:54.619325       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:19:54.619332       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:19:54.621378       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:19:54.623669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:19:54.626666       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:19:54.630380       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:19:54.633070       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:19:54.636426       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:19:54.636451       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:19:54.639442       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:19:54.639960       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:19:54.640005       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:19:54.642416       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:19:54.642543       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:19:54.642586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:19:54.642615       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:19:54.642637       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:19:54.643922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:19:54.643982       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:19:54.644546       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:19:54.652022       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:19:54.659254       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:19:54.665546       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [271859b9bb8f7fde968f835a6d30acf5b732b399755af466ff2f2b36bad2c1f1] <==
	I1108 10:19:52.450726       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:19:52.670554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:19:52.773003       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:19:52.773139       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:19:52.776494       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:19:53.035511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:19:53.035639       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:19:53.058601       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:19:53.059010       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:19:53.059331       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:19:53.060655       1 config.go:200] "Starting service config controller"
	I1108 10:19:53.060759       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:19:53.060807       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:19:53.060856       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:19:53.060903       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:19:53.060965       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:19:53.061853       1 config.go:309] "Starting node config controller"
	I1108 10:19:53.061912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:19:53.061959       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:19:53.161452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:19:53.161495       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:19:53.161532       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [373eb15cdaeec71fc8c259392860af8991d890a3ddae1247397a32b21bd3f13a] <==
	I1108 10:19:51.017763       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:19:54.262174       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:19:54.262280       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:19:54.276828       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:19:54.277047       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:19:54.277840       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:19:54.281241       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:19:54.281953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:19:54.281711       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:19:54.282074       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:19:54.277379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:19:54.377991       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:19:54.383118       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:19:54.383189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:19:46 newest-cni-330758 kubelet[727]: E1108 10:19:46.885807     727 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-330758\" not found" node="newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.060981     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.431792     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.431913     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.431956     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.434537     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: E1108 10:19:51.481814     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-330758\" already exists" pod="kube-system/kube-apiserver-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.481861     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: E1108 10:19:51.495361     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-330758\" already exists" pod="kube-system/kube-controller-manager-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.495413     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: E1108 10:19:51.507687     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-330758\" already exists" pod="kube-system/kube-scheduler-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.507744     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: E1108 10:19:51.526594     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-330758\" already exists" pod="kube-system/etcd-newest-cni-330758"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.644181     727 apiserver.go:52] "Watching apiserver"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.660822     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.673824     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c81513fd-e2c2-4e11-a842-c8ae0ceaed28-lib-modules\") pod \"kube-proxy-hzls4\" (UID: \"c81513fd-e2c2-4e11-a842-c8ae0ceaed28\") " pod="kube-system/kube-proxy-hzls4"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.674019     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-xtables-lock\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.674134     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c81513fd-e2c2-4e11-a842-c8ae0ceaed28-xtables-lock\") pod \"kube-proxy-hzls4\" (UID: \"c81513fd-e2c2-4e11-a842-c8ae0ceaed28\") " pod="kube-system/kube-proxy-hzls4"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.674221     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-cni-cfg\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.674292     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c14e613a-b33c-4bde-9cd9-0bf775170ccf-lib-modules\") pod \"kindnet-2cmcs\" (UID: \"c14e613a-b33c-4bde-9cd9-0bf775170ccf\") " pod="kube-system/kindnet-2cmcs"
	Nov 08 10:19:51 newest-cni-330758 kubelet[727]: I1108 10:19:51.769141     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:19:52 newest-cni-330758 kubelet[727]: W1108 10:19:52.008336     727 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7ffe9198584b5d9f3b9fd8021e43b950c547cc4e2199d6ce76e47bb3c9085f55/crio-755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3 WatchSource:0}: Error finding container 755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3: Status 404 returned error can't find the container with id 755a5e9239c6969047177992f13292766ad6d0c8a0834d06c821ba0bcf0a31e3
	Nov 08 10:19:54 newest-cni-330758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:19:54 newest-cni-330758 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:19:54 newest-cni-330758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
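The captured logs above show two things relevant to this failure: the node reports NotReady because kubelet finds no CNI configuration file in /etc/cni/net.d/, and the final kubelet entries show systemd stopping kubelet.service (the pause path disables kubelet before pausing containers). A minimal sketch of re-checking both observations by hand against the same profile, assuming it is still up; these commands are illustrative and not part of the test run:

	# Was any CNI config ever written on the node? (path taken from the KubeletNotReady message)
	out/minikube-linux-arm64 -p newest-cni-330758 ssh -- ls -la /etc/cni/net.d/
	# Did the pause attempt leave kubelet stopped/disabled?
	out/minikube-linux-arm64 -p newest-cni-330758 ssh -- sudo systemctl is-active kubelet
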
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-330758 -n newest-cni-330758
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-330758 -n newest-cni-330758: exit status 2 (710.299371ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-330758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64: exit status 1 (103.330609ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4zq2p" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-k5mfh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-75h64" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.09s)
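
The post-mortem above reduces to three commands; the following is a minimal sketch of replaying them by hand against the newest-cni-330758 profile, taken from the helpers_test.go steps shown above and listed here for reference only:

	# minikube's view of the API server (reported "Running" above, with exit status 2)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-330758 -n newest-cni-330758
	# pods not in the Running phase, across all namespaces
	kubectl --context newest-cni-330758 get po -A --field-selector=status.phase!=Running -o=jsonpath={.items[*].metadata.name}
	# describe whatever that returned (each pod came back NotFound in this run)
	kubectl --context newest-cni-330758 describe pod coredns-66bc5c9577-4zq2p storage-provisioner dashboard-metrics-scraper-6ffb444bf9-k5mfh kubernetes-dashboard-855c9754f9-75h64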

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-689864 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-689864 --alsologtostderr -v=1: exit status 80 (2.50992322s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-689864 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:21:01.025839  507585 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:21:01.026000  507585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:21:01.026031  507585 out.go:374] Setting ErrFile to fd 2...
	I1108 10:21:01.026054  507585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:21:01.026323  507585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:21:01.026628  507585 out.go:368] Setting JSON to false
	I1108 10:21:01.026685  507585 mustload.go:66] Loading cluster: default-k8s-diff-port-689864
	I1108 10:21:01.027101  507585 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:21:01.027729  507585 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:21:01.045717  507585 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:21:01.046040  507585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:21:01.115464  507585 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:21:01.105084201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:21:01.116144  507585 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-689864 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:21:01.119605  507585 out.go:179] * Pausing node default-k8s-diff-port-689864 ... 
	I1108 10:21:01.122414  507585 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:21:01.122825  507585 ssh_runner.go:195] Run: systemctl --version
	I1108 10:21:01.122888  507585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:21:01.143671  507585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:21:01.252047  507585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:21:01.278786  507585 pause.go:52] kubelet running: true
	I1108 10:21:01.278854  507585 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:21:01.545862  507585 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:21:01.545951  507585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:21:01.620080  507585 cri.go:89] found id: "10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6"
	I1108 10:21:01.620105  507585 cri.go:89] found id: "5a084c94a897ef0faff55cd4571b9c32e4916c363b93ae5dc26fed7fccd7e734"
	I1108 10:21:01.620115  507585 cri.go:89] found id: "0455a60ba551be5c0cb57017db7dd7feed4f40e8c8664e93b99577237ca69648"
	I1108 10:21:01.620119  507585 cri.go:89] found id: "762f453d0ed140c7ed3168b3be237671651875c772656e7c8386789778118c3f"
	I1108 10:21:01.620122  507585 cri.go:89] found id: "f4e51831398ac84ed17388fb9854f362cc97cdc451a2c0067f3ed3f0212bde73"
	I1108 10:21:01.620126  507585 cri.go:89] found id: "3c3f47aaf8c2bf2f806127afc4cef0f4e20c63bf1935191f5191a6f957bb90b2"
	I1108 10:21:01.620131  507585 cri.go:89] found id: "7c3023cf0ac48ce1231cf5627139c9c901b7e3a38e6a7f0dfb985a9bbc24f99e"
	I1108 10:21:01.620134  507585 cri.go:89] found id: "4b189591b949c1399a852982d38b83ef6f69386660f0ce7f89ebbac8ca01ebfe"
	I1108 10:21:01.620137  507585 cri.go:89] found id: "0ae22b5caa485e158ab01e45cf711300c699f6058f50e6280baa756503407fde"
	I1108 10:21:01.620148  507585 cri.go:89] found id: "bb8f6efdfd72d470271b08d8a31ef27bfa54975f23060cafa4f9726a1bce850a"
	I1108 10:21:01.620151  507585 cri.go:89] found id: "acb4867f3275ecac629838ded9af585b55ba0b90aec59c3613305b5f9f2c9d3d"
	I1108 10:21:01.620154  507585 cri.go:89] found id: ""
	I1108 10:21:01.620209  507585 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:21:01.631668  507585 retry.go:31] will retry after 324.157131ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:21:01Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:21:01.956086  507585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:21:01.969663  507585 pause.go:52] kubelet running: false
	I1108 10:21:01.969752  507585 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:21:02.194637  507585 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:21:02.194716  507585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:21:02.266019  507585 cri.go:89] found id: "10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6"
	I1108 10:21:02.266085  507585 cri.go:89] found id: "5a084c94a897ef0faff55cd4571b9c32e4916c363b93ae5dc26fed7fccd7e734"
	I1108 10:21:02.266117  507585 cri.go:89] found id: "0455a60ba551be5c0cb57017db7dd7feed4f40e8c8664e93b99577237ca69648"
	I1108 10:21:02.266135  507585 cri.go:89] found id: "762f453d0ed140c7ed3168b3be237671651875c772656e7c8386789778118c3f"
	I1108 10:21:02.266169  507585 cri.go:89] found id: "f4e51831398ac84ed17388fb9854f362cc97cdc451a2c0067f3ed3f0212bde73"
	I1108 10:21:02.266193  507585 cri.go:89] found id: "3c3f47aaf8c2bf2f806127afc4cef0f4e20c63bf1935191f5191a6f957bb90b2"
	I1108 10:21:02.266212  507585 cri.go:89] found id: "7c3023cf0ac48ce1231cf5627139c9c901b7e3a38e6a7f0dfb985a9bbc24f99e"
	I1108 10:21:02.266231  507585 cri.go:89] found id: "4b189591b949c1399a852982d38b83ef6f69386660f0ce7f89ebbac8ca01ebfe"
	I1108 10:21:02.266265  507585 cri.go:89] found id: "0ae22b5caa485e158ab01e45cf711300c699f6058f50e6280baa756503407fde"
	I1108 10:21:02.266287  507585 cri.go:89] found id: "bb8f6efdfd72d470271b08d8a31ef27bfa54975f23060cafa4f9726a1bce850a"
	I1108 10:21:02.266305  507585 cri.go:89] found id: "acb4867f3275ecac629838ded9af585b55ba0b90aec59c3613305b5f9f2c9d3d"
	I1108 10:21:02.266337  507585 cri.go:89] found id: ""
	I1108 10:21:02.266420  507585 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:21:02.280373  507585 retry.go:31] will retry after 188.928764ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:21:02Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:21:02.469923  507585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:21:02.485478  507585 pause.go:52] kubelet running: false
	I1108 10:21:02.485608  507585 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:21:02.671688  507585 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:21:02.671764  507585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:21:02.754073  507585 cri.go:89] found id: "10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6"
	I1108 10:21:02.754096  507585 cri.go:89] found id: "5a084c94a897ef0faff55cd4571b9c32e4916c363b93ae5dc26fed7fccd7e734"
	I1108 10:21:02.754102  507585 cri.go:89] found id: "0455a60ba551be5c0cb57017db7dd7feed4f40e8c8664e93b99577237ca69648"
	I1108 10:21:02.754106  507585 cri.go:89] found id: "762f453d0ed140c7ed3168b3be237671651875c772656e7c8386789778118c3f"
	I1108 10:21:02.754110  507585 cri.go:89] found id: "f4e51831398ac84ed17388fb9854f362cc97cdc451a2c0067f3ed3f0212bde73"
	I1108 10:21:02.754114  507585 cri.go:89] found id: "3c3f47aaf8c2bf2f806127afc4cef0f4e20c63bf1935191f5191a6f957bb90b2"
	I1108 10:21:02.754117  507585 cri.go:89] found id: "7c3023cf0ac48ce1231cf5627139c9c901b7e3a38e6a7f0dfb985a9bbc24f99e"
	I1108 10:21:02.754120  507585 cri.go:89] found id: "4b189591b949c1399a852982d38b83ef6f69386660f0ce7f89ebbac8ca01ebfe"
	I1108 10:21:02.754124  507585 cri.go:89] found id: "0ae22b5caa485e158ab01e45cf711300c699f6058f50e6280baa756503407fde"
	I1108 10:21:02.754140  507585 cri.go:89] found id: "bb8f6efdfd72d470271b08d8a31ef27bfa54975f23060cafa4f9726a1bce850a"
	I1108 10:21:02.754147  507585 cri.go:89] found id: "acb4867f3275ecac629838ded9af585b55ba0b90aec59c3613305b5f9f2c9d3d"
	I1108 10:21:02.754151  507585 cri.go:89] found id: ""
	I1108 10:21:02.754201  507585 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:21:02.765281  507585 retry.go:31] will retry after 399.254438ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:21:02Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:21:03.164775  507585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:21:03.178060  507585 pause.go:52] kubelet running: false
	I1108 10:21:03.178126  507585 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:21:03.361353  507585 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:21:03.361477  507585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:21:03.437813  507585 cri.go:89] found id: "10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6"
	I1108 10:21:03.437889  507585 cri.go:89] found id: "5a084c94a897ef0faff55cd4571b9c32e4916c363b93ae5dc26fed7fccd7e734"
	I1108 10:21:03.437902  507585 cri.go:89] found id: "0455a60ba551be5c0cb57017db7dd7feed4f40e8c8664e93b99577237ca69648"
	I1108 10:21:03.437907  507585 cri.go:89] found id: "762f453d0ed140c7ed3168b3be237671651875c772656e7c8386789778118c3f"
	I1108 10:21:03.437916  507585 cri.go:89] found id: "f4e51831398ac84ed17388fb9854f362cc97cdc451a2c0067f3ed3f0212bde73"
	I1108 10:21:03.437920  507585 cri.go:89] found id: "3c3f47aaf8c2bf2f806127afc4cef0f4e20c63bf1935191f5191a6f957bb90b2"
	I1108 10:21:03.437923  507585 cri.go:89] found id: "7c3023cf0ac48ce1231cf5627139c9c901b7e3a38e6a7f0dfb985a9bbc24f99e"
	I1108 10:21:03.437926  507585 cri.go:89] found id: "4b189591b949c1399a852982d38b83ef6f69386660f0ce7f89ebbac8ca01ebfe"
	I1108 10:21:03.437929  507585 cri.go:89] found id: "0ae22b5caa485e158ab01e45cf711300c699f6058f50e6280baa756503407fde"
	I1108 10:21:03.437935  507585 cri.go:89] found id: "bb8f6efdfd72d470271b08d8a31ef27bfa54975f23060cafa4f9726a1bce850a"
	I1108 10:21:03.437938  507585 cri.go:89] found id: "acb4867f3275ecac629838ded9af585b55ba0b90aec59c3613305b5f9f2c9d3d"
	I1108 10:21:03.437941  507585 cri.go:89] found id: ""
	I1108 10:21:03.438002  507585 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:21:03.452876  507585 out.go:203] 
	W1108 10:21:03.455857  507585 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:21:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:21:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:21:03.455876  507585 out.go:285] * 
	* 
	W1108 10:21:03.462953  507585 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:21:03.467889  507585 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-689864 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-689864
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-689864:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f",
	        "Created": "2025-11-08T10:18:18.537571387Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 502445,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:19:58.389892624Z",
	            "FinishedAt": "2025-11-08T10:19:57.267414744Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/hostname",
	        "HostsPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/hosts",
	        "LogPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f-json.log",
	        "Name": "/default-k8s-diff-port-689864",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-689864:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-689864",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f",
	                "LowerDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-689864",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-689864/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-689864",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-689864",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-689864",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "41193e97f555cd4f6313bd0889053da7842e2d5bbb221bbaa247fc398183d460",
	            "SandboxKey": "/var/run/docker/netns/41193e97f555",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-689864": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:eb:fe:cd:d9:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d632f4190a5769bf708ccc9b7017dc54cf240a895d92fa0248d238a968a6188d",
	                    "EndpointID": "c1f4fccbf2d5c1301b5d9c7300ff155cb0ae9fc5794c9bf653d3e07b3595537a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-689864",
	                        "48dfdc9a3efb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864: exit status 2 (382.62861ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-689864 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-689864 logs -n 25: (1.366210358s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p disable-driver-mounts-708013                                                                                                                                                                                                               │ disable-driver-mounts-708013 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ embed-certs-606645 image list --format=json                                                                                                                                                                                                   │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-606645 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p newest-cni-330758 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-330758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-689864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-689864 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ newest-cni-330758 image list --format=json                                                                                                                                                                                                    │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ pause   │ -p newest-cni-330758 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-689864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:20 UTC │
	│ delete  │ -p newest-cni-330758                                                                                                                                                                                                                          │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:20 UTC │ 08 Nov 25 10:20 UTC │
	│ delete  │ -p newest-cni-330758                                                                                                                                                                                                                          │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:20 UTC │ 08 Nov 25 10:20 UTC │
	│ start   │ -p auto-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-099098                  │ jenkins │ v1.37.0 │ 08 Nov 25 10:20 UTC │                     │
	│ image   │ default-k8s-diff-port-689864 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:21 UTC │ 08 Nov 25 10:21 UTC │
	│ pause   │ -p default-k8s-diff-port-689864 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:20:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:20:03.334978  503626 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:20:03.335191  503626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:20:03.335218  503626 out.go:374] Setting ErrFile to fd 2...
	I1108 10:20:03.335239  503626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:20:03.335532  503626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:20:03.335982  503626 out.go:368] Setting JSON to false
	I1108 10:20:03.337152  503626 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10953,"bootTime":1762586251,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:20:03.337296  503626 start.go:143] virtualization:  
	I1108 10:20:03.341174  503626 out.go:179] * [auto-099098] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:20:03.345480  503626 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:20:03.345573  503626 notify.go:221] Checking for updates...
	I1108 10:20:03.352884  503626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:20:03.355965  503626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:20:03.359036  503626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:20:03.366697  503626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:20:03.369723  503626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:20:03.373310  503626 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:03.373486  503626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:20:03.412059  503626 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:20:03.412187  503626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:20:03.510448  503626 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:20:03.498608785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:20:03.510560  503626 docker.go:319] overlay module found
	I1108 10:20:03.513796  503626 out.go:179] * Using the docker driver based on user configuration
	I1108 10:20:03.516715  503626 start.go:309] selected driver: docker
	I1108 10:20:03.516736  503626 start.go:930] validating driver "docker" against <nil>
	I1108 10:20:03.516751  503626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:20:03.517506  503626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:20:03.606277  503626 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:20:03.596758391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:20:03.606433  503626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:20:03.606677  503626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:20:03.611160  503626 out.go:179] * Using Docker driver with root privileges
	I1108 10:20:03.615937  503626 cni.go:84] Creating CNI manager for ""
	I1108 10:20:03.616011  503626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:20:03.616026  503626 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:20:03.616113  503626 start.go:353] cluster config:
	{Name:auto-099098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:20:03.620254  503626 out.go:179] * Starting "auto-099098" primary control-plane node in "auto-099098" cluster
	I1108 10:20:03.623935  503626 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:20:03.628255  503626 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:20:03.632360  503626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:20:03.632438  503626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:20:03.632449  503626 cache.go:59] Caching tarball of preloaded images
	I1108 10:20:03.632541  503626 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:20:03.632551  503626 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:20:03.632676  503626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/config.json ...
	I1108 10:20:03.632695  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/config.json: {Name:mk3367cfe879ea2688831c700b1d7b410e309342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:03.632831  503626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:20:03.657451  503626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:20:03.657476  503626 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:20:03.657489  503626 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:20:03.657512  503626 start.go:360] acquireMachinesLock for auto-099098: {Name:mk73ec5d6302742e62041fa375ebf76ab0a6f674 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:20:03.657608  503626 start.go:364] duration metric: took 76.005µs to acquireMachinesLock for "auto-099098"
	I1108 10:20:03.657638  503626 start.go:93] Provisioning new machine with config: &{Name:auto-099098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:20:03.657720  503626 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:20:03.033952  502245 provision.go:177] copyRemoteCerts
	I1108 10:20:03.034022  502245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:20:03.034071  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:03.070222  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:03.181014  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:20:03.202252  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 10:20:03.223412  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:20:03.244707  502245 provision.go:87] duration metric: took 848.168118ms to configureAuth
	I1108 10:20:03.244735  502245 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:20:03.245023  502245 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:03.245145  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:03.272854  502245 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:03.273232  502245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1108 10:20:03.273257  502245 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:20:03.683275  502245 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:20:03.683310  502245 machine.go:97] duration metric: took 4.920245872s to provisionDockerMachine
	I1108 10:20:03.683320  502245 start.go:293] postStartSetup for "default-k8s-diff-port-689864" (driver="docker")
	I1108 10:20:03.683331  502245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:20:03.683378  502245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:20:03.683426  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:03.715288  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:03.833008  502245 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:20:03.838115  502245 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:20:03.838142  502245 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:20:03.838153  502245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:20:03.838207  502245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:20:03.838304  502245 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:20:03.838411  502245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:20:03.848695  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:20:03.875359  502245 start.go:296] duration metric: took 192.023296ms for postStartSetup
	I1108 10:20:03.875437  502245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:20:03.875476  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:03.899426  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:04.008490  502245 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:20:04.013963  502245 fix.go:56] duration metric: took 5.69798708s for fixHost
	I1108 10:20:04.013986  502245 start.go:83] releasing machines lock for "default-k8s-diff-port-689864", held for 5.698035859s
	I1108 10:20:04.014065  502245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-689864
	I1108 10:20:04.044807  502245 ssh_runner.go:195] Run: cat /version.json
	I1108 10:20:04.044880  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:04.045154  502245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:20:04.045221  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:04.087302  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:04.101287  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:04.305360  502245 ssh_runner.go:195] Run: systemctl --version
	I1108 10:20:04.312189  502245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:20:04.368499  502245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:20:04.373392  502245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:20:04.373539  502245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:20:04.402227  502245 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:20:04.402254  502245 start.go:496] detecting cgroup driver to use...
	I1108 10:20:04.402309  502245 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:20:04.402421  502245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:20:04.427371  502245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:20:04.451711  502245 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:20:04.451784  502245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:20:04.469062  502245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:20:04.483147  502245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:20:04.676214  502245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:20:04.837264  502245 docker.go:234] disabling docker service ...
	I1108 10:20:04.837332  502245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:20:04.856182  502245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:20:04.880217  502245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:20:05.146158  502245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:20:05.297722  502245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:20:05.316812  502245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:20:05.333675  502245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:20:05.333750  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.343677  502245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:20:05.343744  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.353575  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.362998  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.372686  502245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:20:05.381532  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.391275  502245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.403333  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.413162  502245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:20:05.421824  502245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:20:05.430161  502245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:05.581285  502245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:20:06.265951  502245 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:20:06.266022  502245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:20:06.271546  502245 start.go:564] Will wait 60s for crictl version
	I1108 10:20:06.271615  502245 ssh_runner.go:195] Run: which crictl
	I1108 10:20:06.275477  502245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:20:06.315266  502245 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:20:06.315365  502245 ssh_runner.go:195] Run: crio --version
	I1108 10:20:06.350245  502245 ssh_runner.go:195] Run: crio --version
	I1108 10:20:06.392314  502245 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:20:06.395568  502245 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-689864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:20:06.410963  502245 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:20:06.415566  502245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:20:06.426437  502245 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:20:06.426543  502245 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:20:06.426612  502245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:20:06.467426  502245 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:20:06.467445  502245 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:20:06.467500  502245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:20:06.496613  502245 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:20:06.496692  502245 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:20:06.496716  502245 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1108 10:20:06.496861  502245 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-689864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:20:06.496991  502245 ssh_runner.go:195] Run: crio config
	I1108 10:20:06.580299  502245 cni.go:84] Creating CNI manager for ""
	I1108 10:20:06.580370  502245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:20:06.580408  502245 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:20:06.580461  502245 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-689864 NodeName:default-k8s-diff-port-689864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:20:06.580660  502245 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-689864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:20:06.580781  502245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:20:06.591327  502245 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:20:06.591448  502245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:20:06.603106  502245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 10:20:06.617873  502245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:20:06.632265  502245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
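Note: the multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new in the scp line just before this one. A small sketch, assuming gopkg.in/yaml.v3 and a local copy of the file named kubeadm.yaml, that parses each document and reports its kind, which is a quick way to catch syntax problems before kubeadm consumes it:

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// File name is a placeholder; on the node the config lands at /var/tmp/minikube/kubeadm.yaml.new.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 0; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // no more YAML documents
			}
			fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
			os.Exit(1)
		}
		fmt.Printf("document %d: apiVersion=%v kind=%v\n", i, doc["apiVersion"], doc["kind"])
	}
}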
	I1108 10:20:06.646981  502245 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:20:06.651166  502245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:20:06.661853  502245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:06.813129  502245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:20:06.835862  502245 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864 for IP: 192.168.85.2
	I1108 10:20:06.835950  502245 certs.go:195] generating shared ca certs ...
	I1108 10:20:06.835984  502245 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:06.836162  502245 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:20:06.836243  502245 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:20:06.836281  502245 certs.go:257] generating profile certs ...
	I1108 10:20:06.836424  502245 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.key
	I1108 10:20:06.836546  502245 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4
	I1108 10:20:06.836630  502245 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key
	I1108 10:20:06.836796  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:20:06.836860  502245 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:20:06.836885  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:20:06.837029  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:20:06.837098  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:20:06.837163  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:20:06.837257  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:20:06.838154  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:20:06.860006  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:20:06.880217  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:20:06.908534  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:20:06.928482  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 10:20:06.949867  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 10:20:06.970374  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:20:06.990339  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:20:07.013343  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:20:07.034264  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:20:07.053891  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:20:07.076934  502245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:20:07.092733  502245 ssh_runner.go:195] Run: openssl version
	I1108 10:20:07.104475  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:20:07.119974  502245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:07.127899  502245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:07.128016  502245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:07.233360  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:20:07.249497  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:20:07.276144  502245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:20:07.281754  502245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:20:07.281881  502245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:20:07.329006  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:20:07.338027  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:20:07.347279  502245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:20:07.351938  502245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:20:07.352056  502245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:20:07.395582  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:20:07.404869  502245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:20:07.409547  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:20:07.451479  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:20:07.496750  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:20:07.538899  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:20:07.581177  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:20:07.623182  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
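Note: the openssl x509 -checkend 86400 calls above confirm that each existing control-plane certificate stays valid for at least another 24 hours before the restart path reuses it. A pure-Go equivalent of that check (the PEM path in main is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file is still
// valid after the given duration, the same test as `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt.
	ok, err := validFor("apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}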
	I1108 10:20:07.669414  502245 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:20:07.669573  502245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:20:07.669669  502245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:20:07.767625  502245 cri.go:89] found id: ""
	I1108 10:20:07.767744  502245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:20:07.782953  502245 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:20:07.782969  502245 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:20:07.783026  502245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:20:07.798193  502245 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:20:07.798643  502245 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-689864" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:20:07.798789  502245 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-292236/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-689864" cluster setting kubeconfig missing "default-k8s-diff-port-689864" context setting]
	I1108 10:20:07.799106  502245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:07.800511  502245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:20:07.818954  502245 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:20:07.818987  502245 kubeadm.go:602] duration metric: took 36.01178ms to restartPrimaryControlPlane
	I1108 10:20:07.818996  502245 kubeadm.go:403] duration metric: took 149.60455ms to StartCluster
	I1108 10:20:07.819031  502245 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:07.819114  502245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:20:07.819769  502245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:07.820080  502245 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:20:07.820369  502245 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:07.820412  502245 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:20:07.820472  502245 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-689864"
	I1108 10:20:07.820486  502245 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-689864"
	W1108 10:20:07.820492  502245 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:20:07.820512  502245 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:20:07.820844  502245 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-689864"
	I1108 10:20:07.820868  502245 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-689864"
	W1108 10:20:07.820875  502245 addons.go:248] addon dashboard should already be in state true
	I1108 10:20:07.820894  502245 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:20:07.821417  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:20:07.821815  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:20:07.824992  502245 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-689864"
	I1108 10:20:07.825026  502245 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-689864"
	I1108 10:20:07.825332  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:20:07.846574  502245 out.go:179] * Verifying Kubernetes components...
	I1108 10:20:07.858091  502245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:07.870106  502245 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:20:07.878103  502245 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:20:07.878127  502245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:20:07.878194  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:07.886854  502245 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-689864"
	W1108 10:20:07.886877  502245 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:20:07.886903  502245 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:20:07.893094  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:20:07.912987  502245 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:20:07.925202  502245 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:20:07.925587  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:07.932425  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:20:07.932452  502245 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:20:07.932527  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:07.934805  502245 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:20:07.934838  502245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:20:07.934898  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:07.995317  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:07.999954  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:03.663388  503626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:20:03.663635  503626 start.go:159] libmachine.API.Create for "auto-099098" (driver="docker")
	I1108 10:20:03.663675  503626 client.go:173] LocalClient.Create starting
	I1108 10:20:03.663738  503626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:20:03.663778  503626 main.go:143] libmachine: Decoding PEM data...
	I1108 10:20:03.663796  503626 main.go:143] libmachine: Parsing certificate...
	I1108 10:20:03.663851  503626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:20:03.663876  503626 main.go:143] libmachine: Decoding PEM data...
	I1108 10:20:03.663892  503626 main.go:143] libmachine: Parsing certificate...
	I1108 10:20:03.664266  503626 cli_runner.go:164] Run: docker network inspect auto-099098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:20:03.682980  503626 cli_runner.go:211] docker network inspect auto-099098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:20:03.683068  503626 network_create.go:284] running [docker network inspect auto-099098] to gather additional debugging logs...
	I1108 10:20:03.683085  503626 cli_runner.go:164] Run: docker network inspect auto-099098
	W1108 10:20:03.709405  503626 cli_runner.go:211] docker network inspect auto-099098 returned with exit code 1
	I1108 10:20:03.709435  503626 network_create.go:287] error running [docker network inspect auto-099098]: docker network inspect auto-099098: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-099098 not found
	I1108 10:20:03.709449  503626 network_create.go:289] output of [docker network inspect auto-099098]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-099098 not found
	
	** /stderr **
	I1108 10:20:03.709658  503626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:20:03.742983  503626 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:20:03.743372  503626 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:20:03.743598  503626 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:20:03.744010  503626 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cd140}
	I1108 10:20:03.744028  503626 network_create.go:124] attempt to create docker network auto-099098 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:20:03.744081  503626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-099098 auto-099098
	I1108 10:20:03.812568  503626 network_create.go:108] docker network auto-099098 192.168.76.0/24 created
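Note: the network.go lines above walk candidate private /24 subnets (192.168.49.0, 192.168.58.0, 192.168.67.0, ...) and pick the first one no existing bridge occupies before running docker network create for auto-099098. A rough sketch of that selection step; the hard-coded taken list stands in for what is actually derived from `docker network inspect`, and the step of 9 between candidates mirrors the progression visible in the log:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that does not overlap any
// subnet already in use.
func firstFreeSubnet(taken []string) (string, error) {
	var used []*net.IPNet
	for _, t := range taken {
		_, n, err := net.ParseCIDR(t)
		if err != nil {
			return "", err
		}
		used = append(used, n)
	}
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, candidate, _ := net.ParseCIDR(cidr)
		free := true
		for _, u := range used {
			if u.Contains(candidate.IP) || candidate.Contains(u.IP) {
				free = false
				break
			}
		}
		if free {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free subnet found")
}

func main() {
	// Prints 192.168.76.0/24, matching the subnet chosen in the log above.
	fmt.Println(firstFreeSubnet([]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}))
}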
	I1108 10:20:03.812596  503626 kic.go:121] calculated static IP "192.168.76.2" for the "auto-099098" container
	I1108 10:20:03.812683  503626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:20:03.832256  503626 cli_runner.go:164] Run: docker volume create auto-099098 --label name.minikube.sigs.k8s.io=auto-099098 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:20:03.857724  503626 oci.go:103] Successfully created a docker volume auto-099098
	I1108 10:20:03.857821  503626 cli_runner.go:164] Run: docker run --rm --name auto-099098-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-099098 --entrypoint /usr/bin/test -v auto-099098:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:20:04.505300  503626 oci.go:107] Successfully prepared a docker volume auto-099098
	I1108 10:20:04.505361  503626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:20:04.505380  503626 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:20:04.505452  503626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-099098:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 10:20:08.181044  502245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:20:08.267359  502245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:20:08.283766  502245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-689864" to be "Ready" ...
	I1108 10:20:08.321334  502245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:20:08.443988  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:20:08.444054  502245 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:20:08.520501  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:20:08.520566  502245 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:20:08.578616  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:20:08.578683  502245 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:20:08.606281  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:20:08.606347  502245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:20:08.625989  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:20:08.626057  502245 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:20:08.648848  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:20:08.648944  502245 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:20:08.677963  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:20:08.678037  502245 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:20:08.705383  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:20:08.705459  502245 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:20:08.739392  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:20:08.739422  502245 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:20:08.795876  502245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
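Note: the apply above runs on the node with the bundled kubectl binary and the node-local kubeconfig, passing every dashboard manifest in one invocation. A hedged sketch of the same pattern from Go via os/exec; the binary and kubeconfig paths are taken from the log, and running it obviously requires them to exist:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...remaining dashboard manifests listed in the log...
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}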
	I1108 10:20:08.721238  503626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-099098:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.215740962s)
	I1108 10:20:08.721268  503626 kic.go:203] duration metric: took 4.215883848s to extract preloaded images to volume ...
	W1108 10:20:08.721407  503626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:20:08.721517  503626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:20:08.829633  503626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-099098 --name auto-099098 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-099098 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-099098 --network auto-099098 --ip 192.168.76.2 --volume auto-099098:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:20:09.311763  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Running}}
	I1108 10:20:09.339427  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:09.370383  503626 cli_runner.go:164] Run: docker exec auto-099098 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:20:09.429247  503626 oci.go:144] the created container "auto-099098" has a running status.
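Note: oci.go only declares the container running after the docker container inspect calls just above report .State.Running / .State.Status. A small sketch of such a readiness poll, again shelling out to the docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect --format {{.State.Running}}`
// until it prints "true" or the deadline passes.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	fmt.Println(waitRunning("auto-099098", 30*time.Second))
}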
	I1108 10:20:09.429279  503626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa...
	I1108 10:20:10.450434  503626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:20:10.474336  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:10.497961  503626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:20:10.497980  503626 kic_runner.go:114] Args: [docker exec --privileged auto-099098 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:20:10.570189  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:10.598564  503626 machine.go:94] provisionDockerMachine start ...
	I1108 10:20:10.598674  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:10.631274  503626 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:10.631609  503626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1108 10:20:10.631618  503626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:20:10.633148  503626 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
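Note: the "handshake failed: EOF" here is expected right after container creation: sshd inside the kicbase image is not up yet, so libmachine keeps retrying the dial (the next SSH output from this process succeeds a few seconds later, at 10:20:13). A rough sketch of such a retry loop with golang.org/x/crypto/ssh; the address, user, and key path below are illustrative placeholders taken from the log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		Timeout:         5 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err // e.g. "ssh: handshake failed: EOF" while sshd is still starting
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	_, err := dialWithRetry("127.0.0.1:33468", "docker", "id_rsa", 10)
	fmt.Println(err)
}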
	I1108 10:20:14.086145  502245 node_ready.go:49] node "default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:14.086174  502245 node_ready.go:38] duration metric: took 5.802372364s for node "default-k8s-diff-port-689864" to be "Ready" ...
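Note: node_ready.go polls the node object until its Ready condition reports True (about 5.8s here). A minimal client-go sketch of that wait; the kubeconfig path and node name are taken from this run and would need adjusting elsewhere:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21866-292236/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-689864", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to be Ready")
}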
	I1108 10:20:14.086190  502245 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:20:14.086267  502245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:20:14.389186  502245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.121796936s)
	I1108 10:20:16.676544  502245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.355132783s)
	I1108 10:20:16.676672  502245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.880769154s)
	I1108 10:20:16.676841  502245 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.590561836s)
	I1108 10:20:16.676859  502245 api_server.go:72] duration metric: took 8.856751379s to wait for apiserver process to appear ...
	I1108 10:20:16.676866  502245 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:20:16.676881  502245 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1108 10:20:16.679761  502245 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-689864 addons enable metrics-server
	
	I1108 10:20:16.682636  502245 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1108 10:20:13.824712  503626 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-099098
	
	I1108 10:20:13.824738  503626 ubuntu.go:182] provisioning hostname "auto-099098"
	I1108 10:20:13.824836  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:13.854478  503626 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:13.854844  503626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1108 10:20:13.854857  503626 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-099098 && echo "auto-099098" | sudo tee /etc/hostname
	I1108 10:20:14.042582  503626 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-099098
	
	I1108 10:20:14.042741  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:14.070136  503626 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:14.070442  503626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1108 10:20:14.070459  503626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-099098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-099098/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-099098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:20:14.273730  503626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:20:14.273755  503626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:20:14.273778  503626 ubuntu.go:190] setting up certificates
	I1108 10:20:14.273788  503626 provision.go:84] configureAuth start
	I1108 10:20:14.273848  503626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-099098
	I1108 10:20:14.311095  503626 provision.go:143] copyHostCerts
	I1108 10:20:14.311162  503626 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:20:14.311171  503626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:20:14.311248  503626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:20:14.311340  503626 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:20:14.311346  503626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:20:14.311371  503626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:20:14.311418  503626 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:20:14.311427  503626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:20:14.311450  503626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:20:14.311496  503626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.auto-099098 san=[127.0.0.1 192.168.76.2 auto-099098 localhost minikube]
	I1108 10:20:14.830254  503626 provision.go:177] copyRemoteCerts
	I1108 10:20:14.830439  503626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:20:14.830512  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:14.850771  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:14.968314  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 10:20:15.007561  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:20:15.046388  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:20:15.079360  503626 provision.go:87] duration metric: took 805.547891ms to configureAuth
	I1108 10:20:15.079392  503626 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:20:15.079587  503626 config.go:182] Loaded profile config "auto-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:15.079701  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.107118  503626 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:15.107449  503626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1108 10:20:15.107471  503626 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:20:15.509241  503626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:20:15.509340  503626 machine.go:97] duration metric: took 4.910740836s to provisionDockerMachine
	I1108 10:20:15.509366  503626 client.go:176] duration metric: took 11.845678989s to LocalClient.Create
	I1108 10:20:15.509413  503626 start.go:167] duration metric: took 11.845777846s to libmachine.API.Create "auto-099098"
	I1108 10:20:15.509442  503626 start.go:293] postStartSetup for "auto-099098" (driver="docker")
	I1108 10:20:15.509467  503626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:20:15.509561  503626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:20:15.509625  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.538177  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:15.663556  503626 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:20:15.667820  503626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:20:15.667861  503626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:20:15.667873  503626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:20:15.667926  503626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:20:15.668012  503626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:20:15.668127  503626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:20:15.683314  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:20:15.706228  503626 start.go:296] duration metric: took 196.756937ms for postStartSetup
	I1108 10:20:15.706641  503626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-099098
	I1108 10:20:15.738669  503626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/config.json ...
	I1108 10:20:15.738944  503626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:20:15.739005  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.773129  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:15.896090  503626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:20:15.902893  503626 start.go:128] duration metric: took 12.245155193s to createHost
	I1108 10:20:15.902923  503626 start.go:83] releasing machines lock for "auto-099098", held for 12.24529963s
	I1108 10:20:15.903002  503626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-099098
	I1108 10:20:15.931046  503626 ssh_runner.go:195] Run: cat /version.json
	I1108 10:20:15.931103  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.931355  503626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:20:15.931424  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.967528  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:15.969690  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:16.093889  503626 ssh_runner.go:195] Run: systemctl --version
	I1108 10:20:16.225003  503626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:20:16.295304  503626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:20:16.301608  503626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:20:16.301690  503626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:20:16.341909  503626 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:20:16.341945  503626 start.go:496] detecting cgroup driver to use...
	I1108 10:20:16.341981  503626 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:20:16.342046  503626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:20:16.372527  503626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:20:16.386264  503626 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:20:16.386337  503626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:20:16.405731  503626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:20:16.427506  503626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:20:16.619498  503626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:20:16.807453  503626 docker.go:234] disabling docker service ...
	I1108 10:20:16.807535  503626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:20:16.844101  503626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:20:16.867032  503626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:20:17.064545  503626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:20:17.204250  503626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:20:17.223019  503626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:20:17.243229  503626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:20:17.243296  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.254468  503626 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:20:17.254549  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.266723  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.284442  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.296771  503626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:20:17.306431  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.317301  503626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.336763  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
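The four sed edits above converge on a single CRI-O drop-in file. As a rough sketch of the end state (only the keys touched here are listed; the section names follow stock CRI-O configuration and the real 02-crio.conf carries more settings):

    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # [crio.image]
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # [crio.runtime]
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]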
	I1108 10:20:17.347096  503626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:20:17.359733  503626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:20:17.372601  503626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:17.501015  503626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:20:17.647190  503626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:20:17.647302  503626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:20:17.652866  503626 start.go:564] Will wait 60s for crictl version
	I1108 10:20:17.652955  503626 ssh_runner.go:195] Run: which crictl
	I1108 10:20:17.661695  503626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:20:17.703389  503626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
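Note: the bare crictl calls in this log (the version check just above, and "sudo crictl images --output json" later on) find the CRI-O socket through the /etc/crictl.yaml written at 10:20:17. A minimal sketch of checking this by hand on the node; the explicit --runtime-endpoint form is only needed when that file is absent:

    # crictl reads its default endpoint from /etc/crictl.yaml:
    cat /etc/crictl.yaml        # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version
    # Equivalent call without relying on the config file:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version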
	I1108 10:20:17.703514  503626 ssh_runner.go:195] Run: crio --version
	I1108 10:20:17.749859  503626 ssh_runner.go:195] Run: crio --version
	I1108 10:20:17.791856  503626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:20:16.685508  502245 addons.go:515] duration metric: took 8.865081216s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1108 10:20:16.702162  502245 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:20:16.702190  502245 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:20:17.177811  502245 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1108 10:20:17.186732  502245 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1108 10:20:17.188237  502245 api_server.go:141] control plane version: v1.34.1
	I1108 10:20:17.188258  502245 api_server.go:131] duration metric: took 511.386318ms to wait for apiserver health ...
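The earlier 500s come from the single failing check, [-]poststarthook/rbac/bootstrap-roles, which clears once the bootstrap RBAC objects are written; the endpoint then returns 200 as shown. The same aggregated endpoint can be queried by hand; a sketch, assuming the default system:public-info-viewer role still permits anonymous access to /healthz:

    # ?verbose reproduces the per-check [+]/[-] listing captured above.
    curl -k "https://192.168.85.2:8444/healthz?verbose"
    # A single check can also be probed on its own sub-path:
    curl -k "https://192.168.85.2:8444/healthz/poststarthook/rbac/bootstrap-roles"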
	I1108 10:20:17.188267  502245 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:20:17.192608  502245 system_pods.go:59] 8 kube-system pods found
	I1108 10:20:17.192657  502245 system_pods.go:61] "coredns-66bc5c9577-5nhxx" [ae48e4e7-48a3-4cc4-be6f-1102abd83f25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:20:17.192666  502245 system_pods.go:61] "etcd-default-k8s-diff-port-689864" [78cc584e-cc4b-499b-a3b5-094712ebc4c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:20:17.192673  502245 system_pods.go:61] "kindnet-c98xc" [adc3d88d-8c83-4dab-958c-42c33e6f43f3] Running
	I1108 10:20:17.192679  502245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-689864" [c5808395-3c00-40c6-b9b0-ba89b22436ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:20:17.192686  502245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-689864" [00f28beb-d4d8-4fa0-8d35-f8c0f2a0a09e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:20:17.192691  502245 system_pods.go:61] "kube-proxy-lcscg" [096de2a8-f856-4f6c-ac17-c3e8f292ac77] Running
	I1108 10:20:17.192706  502245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-689864" [de78c3f6-6c2b-4d1b-813a-4c9b69349129] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:20:17.192711  502245 system_pods.go:61] "storage-provisioner" [5a04d7b1-40e4-474f-acab-716d8e5e70de] Running
	I1108 10:20:17.192718  502245 system_pods.go:74] duration metric: took 4.444802ms to wait for pod list to return data ...
	I1108 10:20:17.192727  502245 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:20:17.195504  502245 default_sa.go:45] found service account: "default"
	I1108 10:20:17.195525  502245 default_sa.go:55] duration metric: took 2.792396ms for default service account to be created ...
	I1108 10:20:17.195535  502245 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:20:17.199571  502245 system_pods.go:86] 8 kube-system pods found
	I1108 10:20:17.199603  502245 system_pods.go:89] "coredns-66bc5c9577-5nhxx" [ae48e4e7-48a3-4cc4-be6f-1102abd83f25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:20:17.199613  502245 system_pods.go:89] "etcd-default-k8s-diff-port-689864" [78cc584e-cc4b-499b-a3b5-094712ebc4c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:20:17.199618  502245 system_pods.go:89] "kindnet-c98xc" [adc3d88d-8c83-4dab-958c-42c33e6f43f3] Running
	I1108 10:20:17.199626  502245 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-689864" [c5808395-3c00-40c6-b9b0-ba89b22436ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:20:17.199633  502245 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-689864" [00f28beb-d4d8-4fa0-8d35-f8c0f2a0a09e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:20:17.199639  502245 system_pods.go:89] "kube-proxy-lcscg" [096de2a8-f856-4f6c-ac17-c3e8f292ac77] Running
	I1108 10:20:17.199646  502245 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-689864" [de78c3f6-6c2b-4d1b-813a-4c9b69349129] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:20:17.199650  502245 system_pods.go:89] "storage-provisioner" [5a04d7b1-40e4-474f-acab-716d8e5e70de] Running
	I1108 10:20:17.199657  502245 system_pods.go:126] duration metric: took 4.117013ms to wait for k8s-apps to be running ...
	I1108 10:20:17.199665  502245 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:20:17.199724  502245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:20:17.219717  502245 system_svc.go:56] duration metric: took 20.04187ms WaitForService to wait for kubelet
	I1108 10:20:17.219747  502245 kubeadm.go:587] duration metric: took 9.399638248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:20:17.219767  502245 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:20:17.223773  502245 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:20:17.223817  502245 node_conditions.go:123] node cpu capacity is 2
	I1108 10:20:17.223841  502245 node_conditions.go:105] duration metric: took 4.056442ms to run NodePressure ...
	I1108 10:20:17.223855  502245 start.go:242] waiting for startup goroutines ...
	I1108 10:20:17.223866  502245 start.go:247] waiting for cluster config update ...
	I1108 10:20:17.223878  502245 start.go:256] writing updated cluster config ...
	I1108 10:20:17.224246  502245 ssh_runner.go:195] Run: rm -f paused
	I1108 10:20:17.228990  502245 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:20:17.233564  502245 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5nhxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:17.794983  503626 cli_runner.go:164] Run: docker network inspect auto-099098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:20:17.813597  503626 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:20:17.819181  503626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:20:17.834209  503626 kubeadm.go:884] updating cluster {Name:auto-099098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:20:17.834322  503626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:20:17.834381  503626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:20:17.873820  503626 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:20:17.873842  503626 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:20:17.873909  503626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:20:17.905182  503626 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:20:17.905205  503626 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:20:17.905214  503626 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:20:17.905314  503626 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-099098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
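The [Unit]/[Service] snippet above becomes the kubelet systemd drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 361-byte scp). To inspect the merged unit on the node, assuming the systemd-based kicbase image used here:

    # Show kubelet.service together with all drop-ins, including 10-kubeadm.conf:
    systemctl cat kubelet
    # Confirm the flags the running kubelet was actually started with:
    systemctl show kubelet -p ExecStart --no-pager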
	I1108 10:20:17.905408  503626 ssh_runner.go:195] Run: crio config
	I1108 10:20:17.984042  503626 cni.go:84] Creating CNI manager for ""
	I1108 10:20:17.984068  503626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:20:17.984087  503626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:20:17.984112  503626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-099098 NodeName:auto-099098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:20:17.984245  503626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-099098"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:20:17.984322  503626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:20:17.993714  503626 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:20:17.993794  503626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:20:18.005490  503626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1108 10:20:18.023843  503626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:20:18.039871  503626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
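The 2208-byte file copied above is the rendered kubeadm config shown earlier. Before kubeadm init consumes it, it can be sanity-checked offline; a sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries/v1.34.1 as in this run:

    # Validate the InitConfiguration/ClusterConfiguration/KubeletConfiguration documents:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new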
	I1108 10:20:18.055370  503626 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:20:18.059569  503626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:20:18.070559  503626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:18.197572  503626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:20:18.215254  503626 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098 for IP: 192.168.76.2
	I1108 10:20:18.215277  503626 certs.go:195] generating shared ca certs ...
	I1108 10:20:18.215306  503626 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:18.215464  503626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:20:18.215527  503626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:20:18.215539  503626 certs.go:257] generating profile certs ...
	I1108 10:20:18.215595  503626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.key
	I1108 10:20:18.215612  503626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt with IP's: []
	I1108 10:20:18.439389  503626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt ...
	I1108 10:20:18.439423  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: {Name:mk48f84091bb4f7ebb55d343a2c2dcfb7a96e7d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:18.439629  503626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.key ...
	I1108 10:20:18.439643  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.key: {Name:mk5c72a5c1391c41a11543be183ce76064829017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:18.439739  503626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key.7311688e
	I1108 10:20:18.439756  503626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt.7311688e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:20:19.228036  503626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt.7311688e ...
	I1108 10:20:19.228068  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt.7311688e: {Name:mkac92b1c434de07c5fdf64afb851ccf96850720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:19.228266  503626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key.7311688e ...
	I1108 10:20:19.228282  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key.7311688e: {Name:mked99514682fd6af203cb6fa4464878356fc197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:19.228376  503626 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt.7311688e -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt
	I1108 10:20:19.228452  503626 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key.7311688e -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key
	I1108 10:20:19.228513  503626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.key
	I1108 10:20:19.228529  503626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.crt with IP's: []
	I1108 10:20:19.695246  503626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.crt ...
	I1108 10:20:19.695277  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.crt: {Name:mk030e0fce32a980894cd8b2b0800997b2502b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:19.695494  503626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.key ...
	I1108 10:20:19.695509  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.key: {Name:mk1e1af3f347c6d806904a98a617f6d146be840e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:19.695712  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:20:19.695756  503626 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:20:19.695771  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:20:19.695797  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:20:19.695825  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:20:19.695852  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:20:19.695900  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:20:19.696505  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:20:19.723601  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:20:19.743323  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:20:19.760316  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:20:19.777777  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1108 10:20:19.795262  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:20:19.812878  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:20:19.830427  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:20:19.847902  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:20:19.865805  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:20:19.883997  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:20:19.901438  503626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:20:19.914690  503626 ssh_runner.go:195] Run: openssl version
	I1108 10:20:19.921880  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:20:19.930320  503626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:19.934797  503626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:19.934866  503626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:19.985911  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:20:20.004813  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:20:20.027551  503626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:20:20.033227  503626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:20:20.033296  503626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:20:20.093126  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:20:20.102339  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:20:20.111436  503626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:20:20.115804  503626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:20:20.115870  503626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:20:20.157075  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
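The openssl/ln pairs above follow the standard OpenSSL hashed-directory convention: each CA file is linked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 in this run) so TLS libraries can locate it during verification. Condensed into one step for a single certificate:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    # subject hash of the CA, e.g. b5213941 for minikubeCA.pem here
    h=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"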
	I1108 10:20:20.165718  503626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:20:20.169825  503626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:20:20.169873  503626 kubeadm.go:401] StartCluster: {Name:auto-099098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:20:20.169946  503626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:20:20.170016  503626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:20:20.197401  503626 cri.go:89] found id: ""
	I1108 10:20:20.197481  503626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:20:20.205095  503626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:20:20.212703  503626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:20:20.212830  503626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:20:20.220487  503626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:20:20.220509  503626 kubeadm.go:158] found existing configuration files:
	
	I1108 10:20:20.220591  503626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:20:20.228575  503626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:20:20.228683  503626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:20:20.238539  503626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:20:20.247222  503626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:20:20.247290  503626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:20:20.254670  503626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:20:20.262363  503626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:20:20.262474  503626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:20:20.270671  503626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:20:20.278535  503626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:20:20.278598  503626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:20:20.286143  503626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:20:20.330179  503626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:20:20.330395  503626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:20:20.353647  503626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:20:20.353794  503626 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:20:20.353868  503626 kubeadm.go:319] OS: Linux
	I1108 10:20:20.353958  503626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:20:20.354054  503626 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:20:20.354147  503626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:20:20.354255  503626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:20:20.354366  503626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:20:20.354439  503626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:20:20.354492  503626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:20:20.354548  503626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:20:20.354607  503626 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:20:20.433714  503626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:20:20.433911  503626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:20:20.434031  503626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:20:20.442804  503626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 10:20:19.249418  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:21.739563  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:20.448453  503626 out.go:252]   - Generating certificates and keys ...
	I1108 10:20:20.448582  503626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:20:20.448685  503626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:20:21.115442  503626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:20:21.650391  503626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:20:22.037090  503626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:20:22.532346  503626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1108 10:20:23.740927  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:26.239903  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:23.468653  503626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:20:23.468863  503626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-099098 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:20:24.045954  503626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:20:24.046298  503626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-099098 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:20:24.393137  503626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:20:25.955484  503626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:20:26.271157  503626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:20:26.271443  503626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:20:26.443424  503626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:20:26.957141  503626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:20:27.637013  503626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:20:29.176412  503626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:20:29.689906  503626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:20:29.690664  503626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:20:29.695036  503626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1108 10:20:28.247187  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:30.739446  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:32.740026  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:29.699179  503626 out.go:252]   - Booting up control plane ...
	I1108 10:20:29.699298  503626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:20:29.700736  503626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:20:29.706415  503626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:20:29.725244  503626 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:20:29.725353  503626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:20:29.733360  503626 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:20:29.733748  503626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:20:29.733796  503626 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:20:29.923848  503626 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:20:29.923982  503626 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:20:31.425327  503626 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501228832s
	I1108 10:20:31.428285  503626 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:20:31.428386  503626 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:20:31.428636  503626 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:20:31.428720  503626 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1108 10:20:34.754268  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:37.239532  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:35.848760  503626 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.419783196s
	I1108 10:20:37.724591  503626 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.295488814s
	I1108 10:20:39.429966  503626 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.001415084s
	I1108 10:20:39.451340  503626 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:20:39.468257  503626 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:20:39.483937  503626 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:20:39.485612  503626 kubeadm.go:319] [mark-control-plane] Marking the node auto-099098 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:20:39.501523  503626 kubeadm.go:319] [bootstrap-token] Using token: 99ar74.rb3xng62osk9vs1i
	I1108 10:20:39.504481  503626 out.go:252]   - Configuring RBAC rules ...
	I1108 10:20:39.504620  503626 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:20:39.510090  503626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:20:39.519109  503626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:20:39.523922  503626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:20:39.528674  503626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:20:39.534968  503626 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:20:39.839101  503626 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:20:40.353273  503626 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:20:40.837928  503626 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:20:40.838883  503626 kubeadm.go:319] 
	I1108 10:20:40.838961  503626 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:20:40.838973  503626 kubeadm.go:319] 
	I1108 10:20:40.839055  503626 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:20:40.839063  503626 kubeadm.go:319] 
	I1108 10:20:40.839090  503626 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:20:40.839174  503626 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:20:40.839238  503626 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:20:40.839244  503626 kubeadm.go:319] 
	I1108 10:20:40.839301  503626 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:20:40.839305  503626 kubeadm.go:319] 
	I1108 10:20:40.839355  503626 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:20:40.839360  503626 kubeadm.go:319] 
	I1108 10:20:40.839414  503626 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:20:40.839493  503626 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:20:40.839564  503626 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:20:40.839569  503626 kubeadm.go:319] 
	I1108 10:20:40.839662  503626 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:20:40.839742  503626 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:20:40.839746  503626 kubeadm.go:319] 
	I1108 10:20:40.839833  503626 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 99ar74.rb3xng62osk9vs1i \
	I1108 10:20:40.839941  503626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 10:20:40.839962  503626 kubeadm.go:319] 	--control-plane 
	I1108 10:20:40.839968  503626 kubeadm.go:319] 
	I1108 10:20:40.840056  503626 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:20:40.840060  503626 kubeadm.go:319] 
	I1108 10:20:40.840146  503626 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 99ar74.rb3xng62osk9vs1i \
	I1108 10:20:40.840546  503626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
	I1108 10:20:40.845539  503626 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:20:40.845807  503626 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:20:40.845946  503626 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
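The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA public key. With certificatesDir set to /var/lib/minikube/certs (see the ClusterConfiguration earlier), it can be recomputed on the node with the usual kubeadm recipe, assuming the CA key is RSA as minikube generates by default:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
    # expected to print 1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca for this cluster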
	I1108 10:20:40.845976  503626 cni.go:84] Creating CNI manager for ""
	I1108 10:20:40.845989  503626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:20:40.851100  503626 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1108 10:20:39.739883  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:41.740011  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:40.854166  503626 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:20:40.859439  503626 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:20:40.859459  503626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:20:40.874325  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:20:41.172257  503626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:20:41.172387  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:41.172467  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-099098 minikube.k8s.io/updated_at=2025_11_08T10_20_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=auto-099098 minikube.k8s.io/primary=true
	I1108 10:20:41.366898  503626 ops.go:34] apiserver oom_adj: -16
	I1108 10:20:41.366922  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:41.867693  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:42.367790  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:42.867025  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:43.367194  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:43.867975  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:44.366985  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:44.867477  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:45.367339  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:45.517931  503626 kubeadm.go:1114] duration metric: took 4.345589369s to wait for elevateKubeSystemPrivileges
	I1108 10:20:45.517957  503626 kubeadm.go:403] duration metric: took 25.348085992s to StartCluster
	I1108 10:20:45.517974  503626 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:45.518048  503626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:20:45.519089  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:45.519328  503626 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:20:45.519417  503626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:20:45.519677  503626 config.go:182] Loaded profile config "auto-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:45.519716  503626 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:20:45.519775  503626 addons.go:70] Setting storage-provisioner=true in profile "auto-099098"
	I1108 10:20:45.519788  503626 addons.go:239] Setting addon storage-provisioner=true in "auto-099098"
	I1108 10:20:45.519808  503626 host.go:66] Checking if "auto-099098" exists ...
	I1108 10:20:45.520532  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:45.521004  503626 addons.go:70] Setting default-storageclass=true in profile "auto-099098"
	I1108 10:20:45.521031  503626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-099098"
	I1108 10:20:45.521336  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:45.523685  503626 out.go:179] * Verifying Kubernetes components...
	I1108 10:20:45.529554  503626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:45.564140  503626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:20:45.564779  503626 addons.go:239] Setting addon default-storageclass=true in "auto-099098"
	I1108 10:20:45.564818  503626 host.go:66] Checking if "auto-099098" exists ...
	I1108 10:20:45.565430  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:45.567523  503626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:20:45.567543  503626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:20:45.567602  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:45.594513  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:45.609492  503626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:20:45.609513  503626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:20:45.609579  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:45.639524  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:45.929853  503626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:20:45.938943  503626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:20:45.965111  503626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:20:45.965227  503626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:20:47.000123  503626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.061143746s)
	I1108 10:20:47.000327  503626 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.035077173s)
	I1108 10:20:47.000548  503626 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.035369237s)
	I1108 10:20:47.000573  503626 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 10:20:47.002665  503626 node_ready.go:35] waiting up to 15m0s for node "auto-099098" to be "Ready" ...
	I1108 10:20:47.004295  503626 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1108 10:20:44.239463  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:46.240997  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:47.738788  502245 pod_ready.go:94] pod "coredns-66bc5c9577-5nhxx" is "Ready"
	I1108 10:20:47.738815  502245 pod_ready.go:86] duration metric: took 30.505173686s for pod "coredns-66bc5c9577-5nhxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.741382  502245 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.745869  502245 pod_ready.go:94] pod "etcd-default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:47.745897  502245 pod_ready.go:86] duration metric: took 4.491229ms for pod "etcd-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.748220  502245 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.753342  502245 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:47.753421  502245 pod_ready.go:86] duration metric: took 5.175091ms for pod "kube-apiserver-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.755717  502245 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.937087  502245 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:47.937115  502245 pod_ready.go:86] duration metric: took 181.3715ms for pod "kube-controller-manager-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.008303  503626 addons.go:515] duration metric: took 1.488566288s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1108 10:20:47.504388  503626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-099098" context rescaled to 1 replicas
	I1108 10:20:48.137702  502245 pod_ready.go:83] waiting for pod "kube-proxy-lcscg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:48.536435  502245 pod_ready.go:94] pod "kube-proxy-lcscg" is "Ready"
	I1108 10:20:48.536523  502245 pod_ready.go:86] duration metric: took 398.792922ms for pod "kube-proxy-lcscg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:48.737421  502245 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:49.137302  502245 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:49.137332  502245 pod_ready.go:86] duration metric: took 399.884225ms for pod "kube-scheduler-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:49.137346  502245 pod_ready.go:40] duration metric: took 31.908270404s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:20:49.198305  502245 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:20:49.201379  502245 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-689864" cluster and "default" namespace by default
	W1108 10:20:49.005975  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:20:51.006361  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:20:53.007455  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:20:55.012894  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:20:57.506470  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:21:00.012902  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:21:02.505622  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 08 10:20:42 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:42.634008166Z" level=info msg="Removed container 40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65/dashboard-metrics-scraper" id=2503aea4-a24c-47b6-a667-2700bb25f982 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:20:46 default-k8s-diff-port-689864 conmon[1120]: conmon f4e51831398ac84ed173 <ninfo>: container 1123 exited with status 1
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.623332816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1650373c-3963-49ee-a0db-7c79380465f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.628219043Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a215dc2b-1075-4937-8da9-45f1d284969f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.629363957Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6483aadd-8c05-4d6e-b8d7-e662f789bb19 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.629492926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.643316851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.643555623Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dba6b595ab1636cba456183e2622e0662913f2851aada75e977b12165437892a/merged/etc/passwd: no such file or directory"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.643582823Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dba6b595ab1636cba456183e2622e0662913f2851aada75e977b12165437892a/merged/etc/group: no such file or directory"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.643929452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.679829109Z" level=info msg="Created container 10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6: kube-system/storage-provisioner/storage-provisioner" id=6483aadd-8c05-4d6e-b8d7-e662f789bb19 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.681413485Z" level=info msg="Starting container: 10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6" id=3bb60c86-3b7a-4bef-9fd0-9a3d02b37a7f name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.684134793Z" level=info msg="Started container" PID=1637 containerID=10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6 description=kube-system/storage-provisioner/storage-provisioner id=3bb60c86-3b7a-4bef-9fd0-9a3d02b37a7f name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae79730b26f2003745703ef3eecb3e8c4fe3071ecf99dfa35f8c45159cad11c4
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.622230663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.629553083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.629590647Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.629614007Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.632950631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.632987186Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.633008142Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.636178225Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.636211382Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.63623384Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.640037299Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.640078194Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	10d2d6703c42d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   ae79730b26f20       storage-provisioner                                    kube-system
	bb8f6efdfd72d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   9537bbde8749a       dashboard-metrics-scraper-6ffb444bf9-fgx65             kubernetes-dashboard
	acb4867f3275e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   3b9585af6566a       kubernetes-dashboard-855c9754f9-j9bdq                  kubernetes-dashboard
	a3ddbd760444e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   ad860b7e30669       busybox                                                default
	5a084c94a897e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   886fcc207e406       coredns-66bc5c9577-5nhxx                               kube-system
	0455a60ba551b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   986f7ac3b098a       kindnet-c98xc                                          kube-system
	762f453d0ed14       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   0c4c84de916cf       kube-proxy-lcscg                                       kube-system
	f4e51831398ac       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   ae79730b26f20       storage-provisioner                                    kube-system
	3c3f47aaf8c2b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   5e1de017e5a3d       kube-controller-manager-default-k8s-diff-port-689864   kube-system
	7c3023cf0ac48       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   9cd16a947f849       kube-apiserver-default-k8s-diff-port-689864            kube-system
	4b189591b949c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   54e9ef6890ab8       etcd-default-k8s-diff-port-689864                      kube-system
	0ae22b5caa485       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   dd541acd7adf7       kube-scheduler-default-k8s-diff-port-689864            kube-system
	
	
	==> coredns [5a084c94a897ef0faff55cd4571b9c32e4916c363b93ae5dc26fed7fccd7e734] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56971 - 63809 "HINFO IN 377492404350755260.8445773308788893470. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012031412s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-689864
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-689864
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=default-k8s-diff-port-689864
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_18_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:18:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-689864
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:20:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:20:45 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:20:45 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:20:45 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:20:45 +0000   Sat, 08 Nov 2025 10:19:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-689864
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                374121ba-37fd-4356-a88f-beebc6e065b5
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-5nhxx                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-default-k8s-diff-port-689864                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-c98xc                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-689864             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-689864    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-lcscg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-689864             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fgx65              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j9bdq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m14s              kube-proxy       
	  Normal   Starting                 47s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m21s              kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m21s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m21s              kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m21s              kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m21s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m17s              node-controller  Node default-k8s-diff-port-689864 event: Registered Node default-k8s-diff-port-689864 in Controller
	  Normal   NodeReady                94s                kubelet          Node default-k8s-diff-port-689864 status is now: NodeReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           45s                node-controller  Node default-k8s-diff-port-689864 event: Registered Node default-k8s-diff-port-689864 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[ +26.370836] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[ +23.794161] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4b189591b949c1399a852982d38b83ef6f69386660f0ce7f89ebbac8ca01ebfe] <==
	{"level":"warn","ts":"2025-11-08T10:20:12.077492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.102139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.126261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.145435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.157653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.179201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.201285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.229632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.233569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.256984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.273298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.285810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.302733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.319899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.340366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.356496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.381018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.397005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.414343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.429130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.486835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.512981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.528411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.551579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.602688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50934","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:04 up  3:03,  0 user,  load average: 5.66, 4.58, 3.28
	Linux default-k8s-diff-port-689864 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0455a60ba551be5c0cb57017db7dd7feed4f40e8c8664e93b99577237ca69648] <==
	I1108 10:20:15.348157       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:20:15.348439       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:20:15.349475       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:20:15.349524       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:20:15.349540       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:20:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:20:15.618306       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:20:15.618326       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:20:15.618334       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:20:15.618646       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:20:45.625118       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:20:45.625300       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:20:45.625399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:20:45.625524       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:20:46.918519       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:20:46.918550       1 metrics.go:72] Registering metrics
	I1108 10:20:46.918629       1 controller.go:711] "Syncing nftables rules"
	I1108 10:20:55.621915       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:20:55.621955       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7c3023cf0ac48ce1231cf5627139c9c901b7e3a38e6a7f0dfb985a9bbc24f99e] <==
	I1108 10:20:14.431681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:20:14.468884       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 10:20:14.468993       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:20:14.469122       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:20:14.469168       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:20:14.565424       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:20:14.565464       1 policy_source.go:240] refreshing policies
	I1108 10:20:14.572831       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:20:14.572851       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:20:14.572858       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:20:14.572878       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:20:14.588038       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:20:14.588523       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:20:14.632954       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:20:14.839255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1108 10:20:14.874541       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:20:15.674836       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:20:15.948876       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:20:16.110515       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:20:16.170422       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:20:16.486913       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.126.65"}
	I1108 10:20:16.575796       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.186.239"}
	I1108 10:20:19.168600       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:20:19.303841       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:20:19.712597       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3c3f47aaf8c2bf2f806127afc4cef0f4e20c63bf1935191f5191a6f957bb90b2] <==
	I1108 10:20:19.125890       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:20:19.125997       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:20:19.128213       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:20:19.128335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:20:19.130470       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:20:19.133819       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:20:19.139104       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:20:19.139196       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:20:19.142969       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:20:19.143083       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:20:19.148313       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:20:19.152861       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:20:19.153147       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:20:19.153250       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:20:19.153359       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-689864"
	I1108 10:20:19.154576       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:20:19.154141       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:20:19.154515       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:20:19.154123       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:20:19.154972       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:20:19.155009       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:20:19.161342       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:20:19.173496       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:20:19.767556       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1108 10:20:19.767904       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [762f453d0ed140c7ed3168b3be237671651875c772656e7c8386789778118c3f] <==
	I1108 10:20:16.637154       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:20:16.973548       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:20:17.080831       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:20:17.081132       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:20:17.081225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:20:17.107880       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:20:17.107936       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:20:17.117177       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:20:17.117520       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:20:17.117544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:20:17.118561       1 config.go:200] "Starting service config controller"
	I1108 10:20:17.118658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:20:17.131176       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:20:17.131269       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:20:17.131344       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:20:17.131376       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:20:17.132066       1 config.go:309] "Starting node config controller"
	I1108 10:20:17.137070       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:20:17.137167       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:20:17.219493       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:20:17.231846       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:20:17.231881       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0ae22b5caa485e158ab01e45cf711300c699f6058f50e6280baa756503407fde] <==
	I1108 10:20:12.337022       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:20:16.825041       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:20:16.826888       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:20:16.846106       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:20:16.846210       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:20:16.846240       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:20:16.846265       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:20:16.848900       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:20:16.849364       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:20:16.849229       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:20:16.849547       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:20:16.947214       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:20:16.949520       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:20:16.949592       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:20:19 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:19.672293     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psgbq\" (UniqueName: \"kubernetes.io/projected/bd217ef4-1a9e-491c-9bef-24b5cf18d140-kube-api-access-psgbq\") pod \"dashboard-metrics-scraper-6ffb444bf9-fgx65\" (UID: \"bd217ef4-1a9e-491c-9bef-24b5cf18d140\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65"
	Nov 08 10:20:19 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:19.672392     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bd217ef4-1a9e-491c-9bef-24b5cf18d140-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fgx65\" (UID: \"bd217ef4-1a9e-491c-9bef-24b5cf18d140\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65"
	Nov 08 10:20:19 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:19.773657     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-j9bdq\" (UID: \"0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j9bdq"
	Nov 08 10:20:19 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:19.773721     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2sb\" (UniqueName: \"kubernetes.io/projected/0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5-kube-api-access-4k2sb\") pod \"kubernetes-dashboard-855c9754f9-j9bdq\" (UID: \"0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j9bdq"
	Nov 08 10:20:20 default-k8s-diff-port-689864 kubelet[779]: W1108 10:20:20.591220     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/crio-9537bbde8749ab8596826ab5356c7aeed6d27a2019cabcee9ecf55a98faf595c WatchSource:0}: Error finding container 9537bbde8749ab8596826ab5356c7aeed6d27a2019cabcee9ecf55a98faf595c: Status 404 returned error can't find the container with id 9537bbde8749ab8596826ab5356c7aeed6d27a2019cabcee9ecf55a98faf595c
	Nov 08 10:20:20 default-k8s-diff-port-689864 kubelet[779]: W1108 10:20:20.636540     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/crio-3b9585af6566affcff431838e95a96f687a70159025fc09f34a208baeeaf8d8f WatchSource:0}: Error finding container 3b9585af6566affcff431838e95a96f687a70159025fc09f34a208baeeaf8d8f: Status 404 returned error can't find the container with id 3b9585af6566affcff431838e95a96f687a70159025fc09f34a208baeeaf8d8f
	Nov 08 10:20:26 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:26.551703     779 scope.go:117] "RemoveContainer" containerID="659287dbb35fb9f3c5f294b3d407ab68e0d692a3e3b495078099987b1f73ac69"
	Nov 08 10:20:27 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:27.559279     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:27 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:27.559576     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:20:27 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:27.570913     779 scope.go:117] "RemoveContainer" containerID="659287dbb35fb9f3c5f294b3d407ab68e0d692a3e3b495078099987b1f73ac69"
	Nov 08 10:20:28 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:28.564099     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:28 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:28.564247     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:20:30 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:30.543543     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:30 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:30.543707     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:42.273225     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:42.609869     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:42.610490     779 scope.go:117] "RemoveContainer" containerID="bb8f6efdfd72d470271b08d8a31ef27bfa54975f23060cafa4f9726a1bce850a"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:42.610685     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:42.632971     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j9bdq" podStartSLOduration=10.995161373 podStartE2EDuration="23.63293728s" podCreationTimestamp="2025-11-08 10:20:19 +0000 UTC" firstStartedPulling="2025-11-08 10:20:20.646818805 +0000 UTC m=+13.794495833" lastFinishedPulling="2025-11-08 10:20:33.284594712 +0000 UTC m=+26.432271740" observedRunningTime="2025-11-08 10:20:33.600059849 +0000 UTC m=+26.747736901" watchObservedRunningTime="2025-11-08 10:20:42.63293728 +0000 UTC m=+35.780614308"
	Nov 08 10:20:46 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:46.622790     779 scope.go:117] "RemoveContainer" containerID="f4e51831398ac84ed17388fb9854f362cc97cdc451a2c0067f3ed3f0212bde73"
	Nov 08 10:20:50 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:50.543282     779 scope.go:117] "RemoveContainer" containerID="bb8f6efdfd72d470271b08d8a31ef27bfa54975f23060cafa4f9726a1bce850a"
	Nov 08 10:20:50 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:50.543481     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:21:01 default-k8s-diff-port-689864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:21:01 default-k8s-diff-port-689864 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:21:01 default-k8s-diff-port-689864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [acb4867f3275ecac629838ded9af585b55ba0b90aec59c3613305b5f9f2c9d3d] <==
	2025/11/08 10:20:33 Starting overwatch
	2025/11/08 10:20:33 Using namespace: kubernetes-dashboard
	2025/11/08 10:20:33 Using in-cluster config to connect to apiserver
	2025/11/08 10:20:33 Using secret token for csrf signing
	2025/11/08 10:20:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:20:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:20:33 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:20:33 Generating JWE encryption key
	2025/11/08 10:20:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:20:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:20:34 Initializing JWE encryption key from synchronized object
	2025/11/08 10:20:34 Creating in-cluster Sidecar client
	2025/11/08 10:20:34 Serving insecurely on HTTP port: 9090
	2025/11/08 10:20:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:21:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6] <==
	I1108 10:20:46.723168       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:20:46.757909       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:20:46.758102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:20:46.760809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:20:50.219856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:20:54.480598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:20:58.078930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:01.133432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:04.155672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:04.160587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:21:04.160747       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:21:04.160906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-689864_da6876e1-3006-43de-9be2-123abc6bda96!
	I1108 10:21:04.161391       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ef98997-9490-4868-b14f-87f19e537ac2", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-689864_da6876e1-3006-43de-9be2-123abc6bda96 became leader
	W1108 10:21:04.168949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:04.178611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:21:04.261887       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-689864_da6876e1-3006-43de-9be2-123abc6bda96!
	
	
	==> storage-provisioner [f4e51831398ac84ed17388fb9854f362cc97cdc451a2c0067f3ed3f0212bde73] <==
	I1108 10:20:16.292090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:20:46.296347       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
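The two storage-provisioner blocks above show a hand-off: the earlier container (f4e518…) fails its startup apiserver check with an i/o timeout against 10.96.0.1:443, while its replacement (10d2d6…) acquires the kube-system/k8s.io-minikube-hostpath lease through the (deprecated) v1 Endpoints lock. As a hedged sketch only, the leader record normally lives in an annotation on that Endpoints object and can be dumped with kubectl; the exact annotation contents are an assumption, not something captured in this report:

    kubectl --context default-k8s-diff-port-689864 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'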
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864: exit status 2 (362.109064ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
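The status checks here and further down pass a Go template to --format, so only the selected field is printed ({{.APIServer}} here, {{.Host}} later) while overall health is still encoded in the exit code, which is why the harness notes that exit status 2 "may be ok" for a paused profile. A minimal sketch querying several fields at once; {{.Kubelet}} is an assumed field name, not one exercised in this report:

    out/minikube-linux-arm64 status -p default-k8s-diff-port-689864 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'; echo "exit=$?"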
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-689864 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-689864
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-689864:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f",
	        "Created": "2025-11-08T10:18:18.537571387Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 502445,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:19:58.389892624Z",
	            "FinishedAt": "2025-11-08T10:19:57.267414744Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/hostname",
	        "HostsPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/hosts",
	        "LogPath": "/var/lib/docker/containers/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f-json.log",
	        "Name": "/default-k8s-diff-port-689864",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-689864:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-689864",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f",
	                "LowerDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02-init/diff:/var/lib/docker/overlay2/1a2240f8ad9def3d0b1645ea818c536a2a6189c522ead803053f4db16c700d72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc32ac583de155469e9ff9330c3479145f775f954b404e4625125e7ba9be1c02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-689864",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-689864/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-689864",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-689864",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-689864",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "41193e97f555cd4f6313bd0889053da7842e2d5bbb221bbaa247fc398183d460",
	            "SandboxKey": "/var/run/docker/netns/41193e97f555",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-689864": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:eb:fe:cd:d9:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d632f4190a5769bf708ccc9b7017dc54cf240a895d92fa0248d238a968a6188d",
	                    "EndpointID": "c1f4fccbf2d5c1301b5d9c7300ff155cb0ae9fc5794c9bf653d3e07b3595537a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-689864",
	                        "48dfdc9a3efb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
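The NetworkSettings.Ports map in the inspect output above (22/tcp published on 127.0.0.1:33463) is how the tooling reaches the node over SSH; the same Go template appears verbatim in the provisioning log below. A hedged stand-alone equivalent:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-689864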
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864: exit status 2 (353.190779ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-689864 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-689864 logs -n 25: (1.315382318s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-872727 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-872727                                                                                                                                                                                                                          │ no-preload-872727            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p disable-driver-mounts-708013                                                                                                                                                                                                               │ disable-driver-mounts-708013 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ embed-certs-606645 image list --format=json                                                                                                                                                                                                   │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-606645 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │                     │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-606645                                                                                                                                                                                                                         │ embed-certs-606645           │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:18 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-330758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p newest-cni-330758 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-330758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ start   │ -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-689864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-689864 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ image   │ newest-cni-330758 image list --format=json                                                                                                                                                                                                    │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ pause   │ -p newest-cni-330758 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-689864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:19 UTC │
	│ start   │ -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:19 UTC │ 08 Nov 25 10:20 UTC │
	│ delete  │ -p newest-cni-330758                                                                                                                                                                                                                          │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:20 UTC │ 08 Nov 25 10:20 UTC │
	│ delete  │ -p newest-cni-330758                                                                                                                                                                                                                          │ newest-cni-330758            │ jenkins │ v1.37.0 │ 08 Nov 25 10:20 UTC │ 08 Nov 25 10:20 UTC │
	│ start   │ -p auto-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-099098                  │ jenkins │ v1.37.0 │ 08 Nov 25 10:20 UTC │                     │
	│ image   │ default-k8s-diff-port-689864 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:21 UTC │ 08 Nov 25 10:21 UTC │
	│ pause   │ -p default-k8s-diff-port-689864 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-689864 │ jenkins │ v1.37.0 │ 08 Nov 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:20:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:20:03.334978  503626 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:20:03.335191  503626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:20:03.335218  503626 out.go:374] Setting ErrFile to fd 2...
	I1108 10:20:03.335239  503626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:20:03.335532  503626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:20:03.335982  503626 out.go:368] Setting JSON to false
	I1108 10:20:03.337152  503626 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10953,"bootTime":1762586251,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:20:03.337296  503626 start.go:143] virtualization:  
	I1108 10:20:03.341174  503626 out.go:179] * [auto-099098] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:20:03.345480  503626 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:20:03.345573  503626 notify.go:221] Checking for updates...
	I1108 10:20:03.352884  503626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:20:03.355965  503626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:20:03.359036  503626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:20:03.366697  503626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:20:03.369723  503626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:20:03.373310  503626 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:03.373486  503626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:20:03.412059  503626 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:20:03.412187  503626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:20:03.510448  503626 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:20:03.498608785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:20:03.510560  503626 docker.go:319] overlay module found
	I1108 10:20:03.513796  503626 out.go:179] * Using the docker driver based on user configuration
	I1108 10:20:03.516715  503626 start.go:309] selected driver: docker
	I1108 10:20:03.516736  503626 start.go:930] validating driver "docker" against <nil>
	I1108 10:20:03.516751  503626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:20:03.517506  503626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:20:03.606277  503626 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:20:03.596758391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:20:03.606433  503626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:20:03.606677  503626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:20:03.611160  503626 out.go:179] * Using Docker driver with root privileges
	I1108 10:20:03.615937  503626 cni.go:84] Creating CNI manager for ""
	I1108 10:20:03.616011  503626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:20:03.616026  503626 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:20:03.616113  503626 start.go:353] cluster config:
	{Name:auto-099098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1108 10:20:03.620254  503626 out.go:179] * Starting "auto-099098" primary control-plane node in "auto-099098" cluster
	I1108 10:20:03.623935  503626 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:20:03.628255  503626 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:20:03.632360  503626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:20:03.632438  503626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:20:03.632449  503626 cache.go:59] Caching tarball of preloaded images
	I1108 10:20:03.632541  503626 preload.go:233] Found /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:20:03.632551  503626 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:20:03.632676  503626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/config.json ...
	I1108 10:20:03.632695  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/config.json: {Name:mk3367cfe879ea2688831c700b1d7b410e309342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:03.632831  503626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:20:03.657451  503626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:20:03.657476  503626 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:20:03.657489  503626 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:20:03.657512  503626 start.go:360] acquireMachinesLock for auto-099098: {Name:mk73ec5d6302742e62041fa375ebf76ab0a6f674 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:20:03.657608  503626 start.go:364] duration metric: took 76.005µs to acquireMachinesLock for "auto-099098"
	I1108 10:20:03.657638  503626 start.go:93] Provisioning new machine with config: &{Name:auto-099098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:20:03.657720  503626 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:20:03.033952  502245 provision.go:177] copyRemoteCerts
	I1108 10:20:03.034022  502245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:20:03.034071  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:03.070222  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:03.181014  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:20:03.202252  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 10:20:03.223412  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:20:03.244707  502245 provision.go:87] duration metric: took 848.168118ms to configureAuth
	I1108 10:20:03.244735  502245 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:20:03.245023  502245 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:03.245145  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:03.272854  502245 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:03.273232  502245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1108 10:20:03.273257  502245 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:20:03.683275  502245 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:20:03.683310  502245 machine.go:97] duration metric: took 4.920245872s to provisionDockerMachine
	I1108 10:20:03.683320  502245 start.go:293] postStartSetup for "default-k8s-diff-port-689864" (driver="docker")
	I1108 10:20:03.683331  502245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:20:03.683378  502245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:20:03.683426  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:03.715288  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:03.833008  502245 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:20:03.838115  502245 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:20:03.838142  502245 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:20:03.838153  502245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:20:03.838207  502245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:20:03.838304  502245 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:20:03.838411  502245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:20:03.848695  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:20:03.875359  502245 start.go:296] duration metric: took 192.023296ms for postStartSetup
	I1108 10:20:03.875437  502245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:20:03.875476  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:03.899426  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:04.008490  502245 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:20:04.013963  502245 fix.go:56] duration metric: took 5.69798708s for fixHost
	I1108 10:20:04.013986  502245 start.go:83] releasing machines lock for "default-k8s-diff-port-689864", held for 5.698035859s
	I1108 10:20:04.014065  502245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-689864
	I1108 10:20:04.044807  502245 ssh_runner.go:195] Run: cat /version.json
	I1108 10:20:04.044880  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:04.045154  502245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:20:04.045221  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:04.087302  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:04.101287  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:04.305360  502245 ssh_runner.go:195] Run: systemctl --version
	I1108 10:20:04.312189  502245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:20:04.368499  502245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:20:04.373392  502245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:20:04.373539  502245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:20:04.402227  502245 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:20:04.402254  502245 start.go:496] detecting cgroup driver to use...
	I1108 10:20:04.402309  502245 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:20:04.402421  502245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:20:04.427371  502245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:20:04.451711  502245 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:20:04.451784  502245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:20:04.469062  502245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:20:04.483147  502245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:20:04.676214  502245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:20:04.837264  502245 docker.go:234] disabling docker service ...
	I1108 10:20:04.837332  502245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:20:04.856182  502245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:20:04.880217  502245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:20:05.146158  502245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:20:05.297722  502245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:20:05.316812  502245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:20:05.333675  502245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:20:05.333750  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.343677  502245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:20:05.343744  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.353575  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.362998  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.372686  502245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:20:05.381532  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.391275  502245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.403333  502245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:05.413162  502245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:20:05.421824  502245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:20:05.430161  502245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:05.581285  502245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:20:06.265951  502245 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:20:06.266022  502245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:20:06.271546  502245 start.go:564] Will wait 60s for crictl version
	I1108 10:20:06.271615  502245 ssh_runner.go:195] Run: which crictl
	I1108 10:20:06.275477  502245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:20:06.315266  502245 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:20:06.315365  502245 ssh_runner.go:195] Run: crio --version
	I1108 10:20:06.350245  502245 ssh_runner.go:195] Run: crio --version
	I1108 10:20:06.392314  502245 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:20:06.395568  502245 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-689864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:20:06.410963  502245 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:20:06.415566  502245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:20:06.426437  502245 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:20:06.426543  502245 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:20:06.426612  502245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:20:06.467426  502245 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:20:06.467445  502245 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:20:06.467500  502245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:20:06.496613  502245 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:20:06.496692  502245 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:20:06.496716  502245 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1108 10:20:06.496861  502245 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-689864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:20:06.496991  502245 ssh_runner.go:195] Run: crio config
	I1108 10:20:06.580299  502245 cni.go:84] Creating CNI manager for ""
	I1108 10:20:06.580370  502245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:20:06.580408  502245 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:20:06.580461  502245 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-689864 NodeName:default-k8s-diff-port-689864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:20:06.580660  502245 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-689864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:20:06.580781  502245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:20:06.591327  502245 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:20:06.591448  502245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:20:06.603106  502245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 10:20:06.617873  502245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:20:06.632265  502245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1108 10:20:06.646981  502245 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:20:06.651166  502245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:20:06.661853  502245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:06.813129  502245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:20:06.835862  502245 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864 for IP: 192.168.85.2
	I1108 10:20:06.835950  502245 certs.go:195] generating shared ca certs ...
	I1108 10:20:06.835984  502245 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:06.836162  502245 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:20:06.836243  502245 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:20:06.836281  502245 certs.go:257] generating profile certs ...
	I1108 10:20:06.836424  502245 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.key
	I1108 10:20:06.836546  502245 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key.d58dafe4
	I1108 10:20:06.836630  502245 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key
	I1108 10:20:06.836796  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:20:06.836860  502245 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:20:06.836885  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:20:06.837029  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:20:06.837098  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:20:06.837163  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:20:06.837257  502245 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:20:06.838154  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:20:06.860006  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:20:06.880217  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:20:06.908534  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:20:06.928482  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 10:20:06.949867  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 10:20:06.970374  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:20:06.990339  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:20:07.013343  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:20:07.034264  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:20:07.053891  502245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:20:07.076934  502245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:20:07.092733  502245 ssh_runner.go:195] Run: openssl version
	I1108 10:20:07.104475  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:20:07.119974  502245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:07.127899  502245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:07.128016  502245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:07.233360  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:20:07.249497  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:20:07.276144  502245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:20:07.281754  502245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:20:07.281881  502245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:20:07.329006  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:20:07.338027  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:20:07.347279  502245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:20:07.351938  502245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:20:07.352056  502245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:20:07.395582  502245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:20:07.404869  502245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:20:07.409547  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:20:07.451479  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:20:07.496750  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:20:07.538899  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:20:07.581177  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:20:07.623182  502245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
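The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links in /etc/ssl/certs, and the -checkend 86400 calls verify that none of the control-plane certificates expires within a day. A small Go sketch that shells out to openssl the same way; the file paths are the ones from this log, but the helper functions themselves are hypothetical:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkIntoTrustStore creates the /etc/ssl/certs/<subject-hash>.0 symlink
    // that OpenSSL uses to look a CA certificate up by subject name.
    func linkIntoTrustStore(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	return os.Symlink(pemPath, filepath.Join("/etc/ssl/certs", hash+".0"))
    }

    // expiresWithinADay mirrors `openssl x509 -checkend 86400`: a non-zero
    // exit status means the certificate will expire within 86400 seconds.
    func expiresWithinADay(pemPath string) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", pemPath, "-checkend", "86400").Run() != nil
    }

    func main() {
    	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	fmt.Println("apiserver-kubelet-client cert expiring soon:",
    		expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }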
	I1108 10:20:07.669414  502245 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-689864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-689864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:20:07.669573  502245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:20:07.669669  502245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:20:07.767625  502245 cri.go:89] found id: ""
	I1108 10:20:07.767744  502245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:20:07.782953  502245 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:20:07.782969  502245 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:20:07.783026  502245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:20:07.798193  502245 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:20:07.798643  502245 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-689864" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:20:07.798789  502245 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-292236/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-689864" cluster setting kubeconfig missing "default-k8s-diff-port-689864" context setting]
	I1108 10:20:07.799106  502245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:07.800511  502245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:20:07.818954  502245 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:20:07.818987  502245 kubeadm.go:602] duration metric: took 36.01178ms to restartPrimaryControlPlane
	I1108 10:20:07.818996  502245 kubeadm.go:403] duration metric: took 149.60455ms to StartCluster
	I1108 10:20:07.819031  502245 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:07.819114  502245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:20:07.819769  502245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:07.820080  502245 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:20:07.820369  502245 config.go:182] Loaded profile config "default-k8s-diff-port-689864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:07.820412  502245 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:20:07.820472  502245 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-689864"
	I1108 10:20:07.820486  502245 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-689864"
	W1108 10:20:07.820492  502245 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:20:07.820512  502245 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:20:07.820844  502245 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-689864"
	I1108 10:20:07.820868  502245 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-689864"
	W1108 10:20:07.820875  502245 addons.go:248] addon dashboard should already be in state true
	I1108 10:20:07.820894  502245 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:20:07.821417  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:20:07.821815  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:20:07.824992  502245 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-689864"
	I1108 10:20:07.825026  502245 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-689864"
	I1108 10:20:07.825332  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:20:07.846574  502245 out.go:179] * Verifying Kubernetes components...
	I1108 10:20:07.858091  502245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:07.870106  502245 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:20:07.878103  502245 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:20:07.878127  502245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:20:07.878194  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:07.886854  502245 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-689864"
	W1108 10:20:07.886877  502245 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:20:07.886903  502245 host.go:66] Checking if "default-k8s-diff-port-689864" exists ...
	I1108 10:20:07.893094  502245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-689864 --format={{.State.Status}}
	I1108 10:20:07.912987  502245 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:20:07.925202  502245 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:20:07.925587  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:07.932425  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:20:07.932452  502245 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:20:07.932527  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:07.934805  502245 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:20:07.934838  502245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:20:07.934898  502245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-689864
	I1108 10:20:07.995317  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:07.999954  502245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/default-k8s-diff-port-689864/id_rsa Username:docker}
	I1108 10:20:03.663388  503626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:20:03.663635  503626 start.go:159] libmachine.API.Create for "auto-099098" (driver="docker")
	I1108 10:20:03.663675  503626 client.go:173] LocalClient.Create starting
	I1108 10:20:03.663738  503626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem
	I1108 10:20:03.663778  503626 main.go:143] libmachine: Decoding PEM data...
	I1108 10:20:03.663796  503626 main.go:143] libmachine: Parsing certificate...
	I1108 10:20:03.663851  503626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem
	I1108 10:20:03.663876  503626 main.go:143] libmachine: Decoding PEM data...
	I1108 10:20:03.663892  503626 main.go:143] libmachine: Parsing certificate...
	I1108 10:20:03.664266  503626 cli_runner.go:164] Run: docker network inspect auto-099098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:20:03.682980  503626 cli_runner.go:211] docker network inspect auto-099098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:20:03.683068  503626 network_create.go:284] running [docker network inspect auto-099098] to gather additional debugging logs...
	I1108 10:20:03.683085  503626 cli_runner.go:164] Run: docker network inspect auto-099098
	W1108 10:20:03.709405  503626 cli_runner.go:211] docker network inspect auto-099098 returned with exit code 1
	I1108 10:20:03.709435  503626 network_create.go:287] error running [docker network inspect auto-099098]: docker network inspect auto-099098: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-099098 not found
	I1108 10:20:03.709449  503626 network_create.go:289] output of [docker network inspect auto-099098]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-099098 not found
	
	** /stderr **
	I1108 10:20:03.709658  503626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:20:03.742983  503626 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
	I1108 10:20:03.743372  503626 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b7578d4e53a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:0e:c7:1b:2a:5b} reservation:<nil>}
	I1108 10:20:03.743598  503626 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5cf16d60bb82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:72:e5:fb:ef:34:ac} reservation:<nil>}
	I1108 10:20:03.744010  503626 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cd140}
	I1108 10:20:03.744028  503626 network_create.go:124] attempt to create docker network auto-099098 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:20:03.744081  503626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-099098 auto-099098
	I1108 10:20:03.812568  503626 network_create.go:108] docker network auto-099098 192.168.76.0/24 created
	I1108 10:20:03.812596  503626 kic.go:121] calculated static IP "192.168.76.2" for the "auto-099098" container
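The subnet scan above skips the 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 blocks already backing other minikube networks and settles on 192.168.76.0/24. A self-contained sketch of that kind of scan using only the host's interface addresses; the starting block and step of 9 mirror what this log shows, and the function is illustrative rather than minikube's actual network.go logic:

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (step 9,
    // as in this log) and returns the first block no local interface uses.
    func firstFreeSubnet() (*net.IPNet, error) {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return nil, err
    	}
    	for third := 49; third <= 254; third += 9 {
    		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
    		taken := false
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
    				taken = true
    				break
    			}
    		}
    		if !taken {
    			return candidate, nil
    		}
    	}
    	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
    }

    func main() {
    	subnet, err := firstFreeSubnet()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("would create the docker network on", subnet)
    }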
	I1108 10:20:03.812683  503626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:20:03.832256  503626 cli_runner.go:164] Run: docker volume create auto-099098 --label name.minikube.sigs.k8s.io=auto-099098 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:20:03.857724  503626 oci.go:103] Successfully created a docker volume auto-099098
	I1108 10:20:03.857821  503626 cli_runner.go:164] Run: docker run --rm --name auto-099098-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-099098 --entrypoint /usr/bin/test -v auto-099098:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:20:04.505300  503626 oci.go:107] Successfully prepared a docker volume auto-099098
	I1108 10:20:04.505361  503626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:20:04.505380  503626 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:20:04.505452  503626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-099098:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 10:20:08.181044  502245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:20:08.267359  502245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:20:08.283766  502245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-689864" to be "Ready" ...
	I1108 10:20:08.321334  502245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:20:08.443988  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:20:08.444054  502245 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:20:08.520501  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:20:08.520566  502245 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:20:08.578616  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:20:08.578683  502245 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:20:08.606281  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:20:08.606347  502245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:20:08.625989  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:20:08.626057  502245 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:20:08.648848  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:20:08.648944  502245 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:20:08.677963  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:20:08.678037  502245 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:20:08.705383  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:20:08.705459  502245 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:20:08.739392  502245 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:20:08.739422  502245 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:20:08.795876  502245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:20:08.721238  503626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-099098:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.215740962s)
	I1108 10:20:08.721268  503626 kic.go:203] duration metric: took 4.215883848s to extract preloaded images to volume ...
	W1108 10:20:08.721407  503626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:20:08.721517  503626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:20:08.829633  503626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-099098 --name auto-099098 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-099098 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-099098 --network auto-099098 --ip 192.168.76.2 --volume auto-099098:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:20:09.311763  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Running}}
	I1108 10:20:09.339427  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:09.370383  503626 cli_runner.go:164] Run: docker exec auto-099098 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:20:09.429247  503626 oci.go:144] the created container "auto-099098" has a running status.
	I1108 10:20:09.429279  503626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa...
	I1108 10:20:10.450434  503626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:20:10.474336  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:10.497961  503626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:20:10.497980  503626 kic_runner.go:114] Args: [docker exec --privileged auto-099098 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:20:10.570189  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:10.598564  503626 machine.go:94] provisionDockerMachine start ...
	I1108 10:20:10.598674  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:10.631274  503626 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:10.631609  503626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1108 10:20:10.631618  503626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:20:10.633148  503626 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
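The handshake-failed EOF above is expected immediately after `docker run`: sshd inside the fresh container is not ready yet, so the first SSH attempt fails and the provisioner retries until the next command succeeds about three seconds later (10:20:13). A stdlib-only sketch of that wait, simplified to a TCP reachability probe rather than a full SSH handshake; the forwarded port 33468 is taken from this log and the function name is illustrative:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH dials the forwarded SSH port until a connection is accepted
    // or the timeout elapses.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("ssh never came up on %s: %w", addr, err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSSH("127.0.0.1:33468", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("sshd is accepting connections")
    }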
	I1108 10:20:14.086145  502245 node_ready.go:49] node "default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:14.086174  502245 node_ready.go:38] duration metric: took 5.802372364s for node "default-k8s-diff-port-689864" to be "Ready" ...
	I1108 10:20:14.086190  502245 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:20:14.086267  502245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:20:14.389186  502245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.121796936s)
	I1108 10:20:16.676544  502245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.355132783s)
	I1108 10:20:16.676672  502245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.880769154s)
	I1108 10:20:16.676841  502245 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.590561836s)
	I1108 10:20:16.676859  502245 api_server.go:72] duration metric: took 8.856751379s to wait for apiserver process to appear ...
	I1108 10:20:16.676866  502245 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:20:16.676881  502245 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1108 10:20:16.679761  502245 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-689864 addons enable metrics-server
	
	I1108 10:20:16.682636  502245 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1108 10:20:13.824712  503626 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-099098
	
	I1108 10:20:13.824738  503626 ubuntu.go:182] provisioning hostname "auto-099098"
	I1108 10:20:13.824836  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:13.854478  503626 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:13.854844  503626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1108 10:20:13.854857  503626 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-099098 && echo "auto-099098" | sudo tee /etc/hostname
	I1108 10:20:14.042582  503626 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-099098
	
	I1108 10:20:14.042741  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:14.070136  503626 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:14.070442  503626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1108 10:20:14.070459  503626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-099098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-099098/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-099098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:20:14.273730  503626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:20:14.273755  503626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-292236/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-292236/.minikube}
	I1108 10:20:14.273778  503626 ubuntu.go:190] setting up certificates
	I1108 10:20:14.273788  503626 provision.go:84] configureAuth start
	I1108 10:20:14.273848  503626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-099098
	I1108 10:20:14.311095  503626 provision.go:143] copyHostCerts
	I1108 10:20:14.311162  503626 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem, removing ...
	I1108 10:20:14.311171  503626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem
	I1108 10:20:14.311248  503626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/ca.pem (1082 bytes)
	I1108 10:20:14.311340  503626 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem, removing ...
	I1108 10:20:14.311346  503626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem
	I1108 10:20:14.311371  503626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/cert.pem (1123 bytes)
	I1108 10:20:14.311418  503626 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem, removing ...
	I1108 10:20:14.311427  503626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem
	I1108 10:20:14.311450  503626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-292236/.minikube/key.pem (1675 bytes)
	I1108 10:20:14.311496  503626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem org=jenkins.auto-099098 san=[127.0.0.1 192.168.76.2 auto-099098 localhost minikube]
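configureAuth above issues a server certificate whose SANs cover every name the machine can be reached by (127.0.0.1, the static container IP 192.168.76.2, the profile name, localhost, minikube). A compact crypto/x509 sketch that produces a certificate with those same SANs; unlike the real flow, which signs with the shared ca.pem/ca-key.pem, this sketch is self-signed to stay short:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.auto-099098"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list logged above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"auto-099098", "localhost", "minikube"},
    	}
    	// Self-signed for brevity; the provisioner signs with the minikube CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }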
	I1108 10:20:14.830254  503626 provision.go:177] copyRemoteCerts
	I1108 10:20:14.830439  503626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:20:14.830512  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:14.850771  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:14.968314  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 10:20:15.007561  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:20:15.046388  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 10:20:15.079360  503626 provision.go:87] duration metric: took 805.547891ms to configureAuth
	I1108 10:20:15.079392  503626 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:20:15.079587  503626 config.go:182] Loaded profile config "auto-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:15.079701  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.107118  503626 main.go:143] libmachine: Using SSH client type: native
	I1108 10:20:15.107449  503626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1108 10:20:15.107471  503626 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:20:15.509241  503626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:20:15.509340  503626 machine.go:97] duration metric: took 4.910740836s to provisionDockerMachine
	I1108 10:20:15.509366  503626 client.go:176] duration metric: took 11.845678989s to LocalClient.Create
	I1108 10:20:15.509413  503626 start.go:167] duration metric: took 11.845777846s to libmachine.API.Create "auto-099098"
	I1108 10:20:15.509442  503626 start.go:293] postStartSetup for "auto-099098" (driver="docker")
	I1108 10:20:15.509467  503626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:20:15.509561  503626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:20:15.509625  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.538177  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:15.663556  503626 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:20:15.667820  503626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:20:15.667861  503626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:20:15.667873  503626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/addons for local assets ...
	I1108 10:20:15.667926  503626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-292236/.minikube/files for local assets ...
	I1108 10:20:15.668012  503626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem -> 2940852.pem in /etc/ssl/certs
	I1108 10:20:15.668127  503626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:20:15.683314  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:20:15.706228  503626 start.go:296] duration metric: took 196.756937ms for postStartSetup
	I1108 10:20:15.706641  503626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-099098
	I1108 10:20:15.738669  503626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/config.json ...
	I1108 10:20:15.738944  503626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:20:15.739005  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.773129  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:15.896090  503626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:20:15.902893  503626 start.go:128] duration metric: took 12.245155193s to createHost
	I1108 10:20:15.902923  503626 start.go:83] releasing machines lock for "auto-099098", held for 12.24529963s
	I1108 10:20:15.903002  503626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-099098
	I1108 10:20:15.931046  503626 ssh_runner.go:195] Run: cat /version.json
	I1108 10:20:15.931103  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.931355  503626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:20:15.931424  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:15.967528  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:15.969690  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:16.093889  503626 ssh_runner.go:195] Run: systemctl --version
	I1108 10:20:16.225003  503626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:20:16.295304  503626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:20:16.301608  503626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:20:16.301690  503626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:20:16.341909  503626 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:20:16.341945  503626 start.go:496] detecting cgroup driver to use...
	I1108 10:20:16.341981  503626 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:20:16.342046  503626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:20:16.372527  503626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:20:16.386264  503626 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:20:16.386337  503626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:20:16.405731  503626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:20:16.427506  503626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:20:16.619498  503626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:20:16.807453  503626 docker.go:234] disabling docker service ...
	I1108 10:20:16.807535  503626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:20:16.844101  503626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:20:16.867032  503626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:20:17.064545  503626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:20:17.204250  503626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:20:17.223019  503626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:20:17.243229  503626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:20:17.243296  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.254468  503626 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:20:17.254549  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.266723  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.284442  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.296771  503626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:20:17.306431  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.317301  503626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.336763  503626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:20:17.347096  503626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:20:17.359733  503626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:20:17.372601  503626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:17.501015  503626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:20:17.647190  503626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:20:17.647302  503626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:20:17.652866  503626 start.go:564] Will wait 60s for crictl version
	I1108 10:20:17.652955  503626 ssh_runner.go:195] Run: which crictl
	I1108 10:20:17.661695  503626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:20:17.703389  503626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:20:17.703514  503626 ssh_runner.go:195] Run: crio --version
	I1108 10:20:17.749859  503626 ssh_runner.go:195] Run: crio --version
	I1108 10:20:17.791856  503626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
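	The two sed commands above pin the cri-o pause image and cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A minimal Go sketch of the same two substitutions, assuming root access to that file (paths and values are taken from the log; this is an illustration, not minikube's implementation):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Same effect as: sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ...
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		// Same effect as: sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ...
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}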
	I1108 10:20:16.685508  502245 addons.go:515] duration metric: took 8.865081216s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1108 10:20:16.702162  502245 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:20:16.702190  502245 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:20:17.177811  502245 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1108 10:20:17.186732  502245 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1108 10:20:17.188237  502245 api_server.go:141] control plane version: v1.34.1
	I1108 10:20:17.188258  502245 api_server.go:131] duration metric: took 511.386318ms to wait for apiserver health ...
	I1108 10:20:17.188267  502245 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:20:17.192608  502245 system_pods.go:59] 8 kube-system pods found
	I1108 10:20:17.192657  502245 system_pods.go:61] "coredns-66bc5c9577-5nhxx" [ae48e4e7-48a3-4cc4-be6f-1102abd83f25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:20:17.192666  502245 system_pods.go:61] "etcd-default-k8s-diff-port-689864" [78cc584e-cc4b-499b-a3b5-094712ebc4c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:20:17.192673  502245 system_pods.go:61] "kindnet-c98xc" [adc3d88d-8c83-4dab-958c-42c33e6f43f3] Running
	I1108 10:20:17.192679  502245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-689864" [c5808395-3c00-40c6-b9b0-ba89b22436ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:20:17.192686  502245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-689864" [00f28beb-d4d8-4fa0-8d35-f8c0f2a0a09e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:20:17.192691  502245 system_pods.go:61] "kube-proxy-lcscg" [096de2a8-f856-4f6c-ac17-c3e8f292ac77] Running
	I1108 10:20:17.192706  502245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-689864" [de78c3f6-6c2b-4d1b-813a-4c9b69349129] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:20:17.192711  502245 system_pods.go:61] "storage-provisioner" [5a04d7b1-40e4-474f-acab-716d8e5e70de] Running
	I1108 10:20:17.192718  502245 system_pods.go:74] duration metric: took 4.444802ms to wait for pod list to return data ...
	I1108 10:20:17.192727  502245 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:20:17.195504  502245 default_sa.go:45] found service account: "default"
	I1108 10:20:17.195525  502245 default_sa.go:55] duration metric: took 2.792396ms for default service account to be created ...
	I1108 10:20:17.195535  502245 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:20:17.199571  502245 system_pods.go:86] 8 kube-system pods found
	I1108 10:20:17.199603  502245 system_pods.go:89] "coredns-66bc5c9577-5nhxx" [ae48e4e7-48a3-4cc4-be6f-1102abd83f25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:20:17.199613  502245 system_pods.go:89] "etcd-default-k8s-diff-port-689864" [78cc584e-cc4b-499b-a3b5-094712ebc4c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:20:17.199618  502245 system_pods.go:89] "kindnet-c98xc" [adc3d88d-8c83-4dab-958c-42c33e6f43f3] Running
	I1108 10:20:17.199626  502245 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-689864" [c5808395-3c00-40c6-b9b0-ba89b22436ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:20:17.199633  502245 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-689864" [00f28beb-d4d8-4fa0-8d35-f8c0f2a0a09e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:20:17.199639  502245 system_pods.go:89] "kube-proxy-lcscg" [096de2a8-f856-4f6c-ac17-c3e8f292ac77] Running
	I1108 10:20:17.199646  502245 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-689864" [de78c3f6-6c2b-4d1b-813a-4c9b69349129] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:20:17.199650  502245 system_pods.go:89] "storage-provisioner" [5a04d7b1-40e4-474f-acab-716d8e5e70de] Running
	I1108 10:20:17.199657  502245 system_pods.go:126] duration metric: took 4.117013ms to wait for k8s-apps to be running ...
	I1108 10:20:17.199665  502245 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:20:17.199724  502245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:20:17.219717  502245 system_svc.go:56] duration metric: took 20.04187ms WaitForService to wait for kubelet
	I1108 10:20:17.219747  502245 kubeadm.go:587] duration metric: took 9.399638248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:20:17.219767  502245 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:20:17.223773  502245 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:20:17.223817  502245 node_conditions.go:123] node cpu capacity is 2
	I1108 10:20:17.223841  502245 node_conditions.go:105] duration metric: took 4.056442ms to run NodePressure ...
	I1108 10:20:17.223855  502245 start.go:242] waiting for startup goroutines ...
	I1108 10:20:17.223866  502245 start.go:247] waiting for cluster config update ...
	I1108 10:20:17.223878  502245 start.go:256] writing updated cluster config ...
	I1108 10:20:17.224246  502245 ssh_runner.go:195] Run: rm -f paused
	I1108 10:20:17.228990  502245 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:20:17.233564  502245 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5nhxx" in "kube-system" namespace to be "Ready" or be gone ...
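	The 500-then-200 responses above come from polling the apiserver's /healthz endpoint until its post-start hooks (here rbac/bootstrap-roles) complete. A rough Go sketch of such a poll, assuming the address from the log and skipping TLS verification purely to keep the example short:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The apiserver uses a cluster-local CA; skipping verification keeps the
				// sketch short. Real callers should trust the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://192.168.85.2:8444/healthz")
			if err == nil {
				if resp.StatusCode == http.StatusOK {
					resp.Body.Close()
					fmt.Println("apiserver reports healthy")
					return
				}
				resp.Body.Close()
			}
			time.Sleep(500 * time.Millisecond)
		}
	}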
	I1108 10:20:17.794983  503626 cli_runner.go:164] Run: docker network inspect auto-099098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:20:17.813597  503626 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:20:17.819181  503626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:20:17.834209  503626 kubeadm.go:884] updating cluster {Name:auto-099098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:20:17.834322  503626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:20:17.834381  503626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:20:17.873820  503626 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:20:17.873842  503626 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:20:17.873909  503626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:20:17.905182  503626 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:20:17.905205  503626 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:20:17.905214  503626 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:20:17.905314  503626 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-099098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:20:17.905408  503626 ssh_runner.go:195] Run: crio config
	I1108 10:20:17.984042  503626 cni.go:84] Creating CNI manager for ""
	I1108 10:20:17.984068  503626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:20:17.984087  503626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:20:17.984112  503626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-099098 NodeName:auto-099098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:20:17.984245  503626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-099098"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:20:17.984322  503626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:20:17.993714  503626 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:20:17.993794  503626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:20:18.005490  503626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1108 10:20:18.023843  503626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:20:18.039871  503626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1108 10:20:18.055370  503626 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:20:18.059569  503626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
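	The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP: it drops any stale entry, appends the new one, and copies the result back. A small Go sketch of the same idea, assuming it runs as root (hostname and IP are from the log; writing the file directly instead of staging through /tmp is a simplification):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale entry for this name, like the `grep -v` in the logged command.
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.76.2\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}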
	I1108 10:20:18.070559  503626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:18.197572  503626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:20:18.215254  503626 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098 for IP: 192.168.76.2
	I1108 10:20:18.215277  503626 certs.go:195] generating shared ca certs ...
	I1108 10:20:18.215306  503626 certs.go:227] acquiring lock for ca certs: {Name:mk8ec14d0ad897585e1f70faa4c95e98a047be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:18.215464  503626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key
	I1108 10:20:18.215527  503626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key
	I1108 10:20:18.215539  503626 certs.go:257] generating profile certs ...
	I1108 10:20:18.215595  503626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.key
	I1108 10:20:18.215612  503626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt with IP's: []
	I1108 10:20:18.439389  503626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt ...
	I1108 10:20:18.439423  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: {Name:mk48f84091bb4f7ebb55d343a2c2dcfb7a96e7d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:18.439629  503626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.key ...
	I1108 10:20:18.439643  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.key: {Name:mk5c72a5c1391c41a11543be183ce76064829017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:18.439739  503626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key.7311688e
	I1108 10:20:18.439756  503626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt.7311688e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:20:19.228036  503626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt.7311688e ...
	I1108 10:20:19.228068  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt.7311688e: {Name:mkac92b1c434de07c5fdf64afb851ccf96850720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:19.228266  503626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key.7311688e ...
	I1108 10:20:19.228282  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key.7311688e: {Name:mked99514682fd6af203cb6fa4464878356fc197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:19.228376  503626 certs.go:382] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt.7311688e -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt
	I1108 10:20:19.228452  503626 certs.go:386] copying /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key.7311688e -> /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key
	I1108 10:20:19.228513  503626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.key
	I1108 10:20:19.228529  503626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.crt with IP's: []
	I1108 10:20:19.695246  503626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.crt ...
	I1108 10:20:19.695277  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.crt: {Name:mk030e0fce32a980894cd8b2b0800997b2502b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:19.695494  503626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.key ...
	I1108 10:20:19.695509  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.key: {Name:mk1e1af3f347c6d806904a98a617f6d146be840e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:19.695712  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem (1338 bytes)
	W1108 10:20:19.695756  503626 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085_empty.pem, impossibly tiny 0 bytes
	I1108 10:20:19.695771  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 10:20:19.695797  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/ca.pem (1082 bytes)
	I1108 10:20:19.695825  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:20:19.695852  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/certs/key.pem (1675 bytes)
	I1108 10:20:19.695900  503626 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem (1708 bytes)
	I1108 10:20:19.696505  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:20:19.723601  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:20:19.743323  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:20:19.760316  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:20:19.777777  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1108 10:20:19.795262  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:20:19.812878  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:20:19.830427  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:20:19.847902  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/certs/294085.pem --> /usr/share/ca-certificates/294085.pem (1338 bytes)
	I1108 10:20:19.865805  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/ssl/certs/2940852.pem --> /usr/share/ca-certificates/2940852.pem (1708 bytes)
	I1108 10:20:19.883997  503626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:20:19.901438  503626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:20:19.914690  503626 ssh_runner.go:195] Run: openssl version
	I1108 10:20:19.921880  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:20:19.930320  503626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:19.934797  503626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:19.934866  503626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:20:19.985911  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:20:20.004813  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294085.pem && ln -fs /usr/share/ca-certificates/294085.pem /etc/ssl/certs/294085.pem"
	I1108 10:20:20.027551  503626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294085.pem
	I1108 10:20:20.033227  503626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:20 /usr/share/ca-certificates/294085.pem
	I1108 10:20:20.033296  503626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294085.pem
	I1108 10:20:20.093126  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294085.pem /etc/ssl/certs/51391683.0"
	I1108 10:20:20.102339  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940852.pem && ln -fs /usr/share/ca-certificates/2940852.pem /etc/ssl/certs/2940852.pem"
	I1108 10:20:20.111436  503626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940852.pem
	I1108 10:20:20.115804  503626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:20 /usr/share/ca-certificates/2940852.pem
	I1108 10:20:20.115870  503626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940852.pem
	I1108 10:20:20.157075  503626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940852.pem /etc/ssl/certs/3ec20f2e.0"
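	The openssl/ln pairs above install each CA certificate under /etc/ssl/certs/<subject-hash>.0 (for example b5213941.0 for minikubeCA.pem) so OpenSSL-linked clients pick it up. A brief Go sketch of that step, shelling out to openssl for the hash as the log does (run as root; error handling kept minimal):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the short subject hash (e.g. b5213941) used for the symlink name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}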
	I1108 10:20:20.165718  503626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:20:20.169825  503626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:20:20.169873  503626 kubeadm.go:401] StartCluster: {Name:auto-099098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-099098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:20:20.169946  503626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:20:20.170016  503626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:20:20.197401  503626 cri.go:89] found id: ""
	I1108 10:20:20.197481  503626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:20:20.205095  503626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:20:20.212703  503626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:20:20.212830  503626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:20:20.220487  503626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:20:20.220509  503626 kubeadm.go:158] found existing configuration files:
	
	I1108 10:20:20.220591  503626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:20:20.228575  503626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:20:20.228683  503626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:20:20.238539  503626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:20:20.247222  503626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:20:20.247290  503626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:20:20.254670  503626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:20:20.262363  503626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:20:20.262474  503626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:20:20.270671  503626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:20:20.278535  503626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:20:20.278598  503626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:20:20.286143  503626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:20:20.330179  503626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:20:20.330395  503626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:20:20.353647  503626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:20:20.353794  503626 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:20:20.353868  503626 kubeadm.go:319] OS: Linux
	I1108 10:20:20.353958  503626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:20:20.354054  503626 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:20:20.354147  503626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:20:20.354255  503626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:20:20.354366  503626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:20:20.354439  503626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:20:20.354492  503626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:20:20.354548  503626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:20:20.354607  503626 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:20:20.433714  503626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:20:20.433911  503626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:20:20.434031  503626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:20:20.442804  503626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 10:20:19.249418  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:21.739563  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:20.448453  503626 out.go:252]   - Generating certificates and keys ...
	I1108 10:20:20.448582  503626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:20:20.448685  503626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:20:21.115442  503626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:20:21.650391  503626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:20:22.037090  503626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:20:22.532346  503626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1108 10:20:23.740927  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:26.239903  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:23.468653  503626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:20:23.468863  503626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-099098 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:20:24.045954  503626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:20:24.046298  503626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-099098 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:20:24.393137  503626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:20:25.955484  503626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:20:26.271157  503626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:20:26.271443  503626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:20:26.443424  503626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:20:26.957141  503626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:20:27.637013  503626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:20:29.176412  503626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:20:29.689906  503626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:20:29.690664  503626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:20:29.695036  503626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1108 10:20:28.247187  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:30.739446  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:32.740026  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:29.699179  503626 out.go:252]   - Booting up control plane ...
	I1108 10:20:29.699298  503626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:20:29.700736  503626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:20:29.706415  503626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:20:29.725244  503626 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:20:29.725353  503626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:20:29.733360  503626 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:20:29.733748  503626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:20:29.733796  503626 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:20:29.923848  503626 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:20:29.923982  503626 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:20:31.425327  503626 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501228832s
	I1108 10:20:31.428285  503626 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:20:31.428386  503626 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:20:31.428636  503626 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:20:31.428720  503626 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1108 10:20:34.754268  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:37.239532  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:35.848760  503626 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.419783196s
	I1108 10:20:37.724591  503626 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.295488814s
	I1108 10:20:39.429966  503626 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.001415084s
	I1108 10:20:39.451340  503626 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:20:39.468257  503626 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:20:39.483937  503626 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:20:39.485612  503626 kubeadm.go:319] [mark-control-plane] Marking the node auto-099098 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:20:39.501523  503626 kubeadm.go:319] [bootstrap-token] Using token: 99ar74.rb3xng62osk9vs1i
	I1108 10:20:39.504481  503626 out.go:252]   - Configuring RBAC rules ...
	I1108 10:20:39.504620  503626 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:20:39.510090  503626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:20:39.519109  503626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:20:39.523922  503626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:20:39.528674  503626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:20:39.534968  503626 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:20:39.839101  503626 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:20:40.353273  503626 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:20:40.837928  503626 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:20:40.838883  503626 kubeadm.go:319] 
	I1108 10:20:40.838961  503626 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:20:40.838973  503626 kubeadm.go:319] 
	I1108 10:20:40.839055  503626 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:20:40.839063  503626 kubeadm.go:319] 
	I1108 10:20:40.839090  503626 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:20:40.839174  503626 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:20:40.839238  503626 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:20:40.839244  503626 kubeadm.go:319] 
	I1108 10:20:40.839301  503626 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:20:40.839305  503626 kubeadm.go:319] 
	I1108 10:20:40.839355  503626 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:20:40.839360  503626 kubeadm.go:319] 
	I1108 10:20:40.839414  503626 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:20:40.839493  503626 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:20:40.839564  503626 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:20:40.839569  503626 kubeadm.go:319] 
	I1108 10:20:40.839662  503626 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:20:40.839742  503626 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:20:40.839746  503626 kubeadm.go:319] 
	I1108 10:20:40.839833  503626 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 99ar74.rb3xng62osk9vs1i \
	I1108 10:20:40.839941  503626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca \
	I1108 10:20:40.839962  503626 kubeadm.go:319] 	--control-plane 
	I1108 10:20:40.839968  503626 kubeadm.go:319] 
	I1108 10:20:40.840056  503626 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:20:40.840060  503626 kubeadm.go:319] 
	I1108 10:20:40.840146  503626 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 99ar74.rb3xng62osk9vs1i \
	I1108 10:20:40.840546  503626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1af093ed7dff9f44a1fce642e37ec2902270e1f7db9f3629423ca1a6b5e81aca 
	I1108 10:20:40.845539  503626 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:20:40.845807  503626 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:20:40.845946  503626 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:20:40.845976  503626 cni.go:84] Creating CNI manager for ""
	I1108 10:20:40.845989  503626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:20:40.851100  503626 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1108 10:20:39.739883  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:41.740011  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:40.854166  503626 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:20:40.859439  503626 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:20:40.859459  503626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:20:40.874325  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:20:41.172257  503626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:20:41.172387  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:41.172467  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-099098 minikube.k8s.io/updated_at=2025_11_08T10_20_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=auto-099098 minikube.k8s.io/primary=true
	I1108 10:20:41.366898  503626 ops.go:34] apiserver oom_adj: -16
	I1108 10:20:41.366922  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:41.867693  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:42.367790  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:42.867025  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:43.367194  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:43.867975  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:44.366985  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:44.867477  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:45.367339  503626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:20:45.517931  503626 kubeadm.go:1114] duration metric: took 4.345589369s to wait for elevateKubeSystemPrivileges
	I1108 10:20:45.517957  503626 kubeadm.go:403] duration metric: took 25.348085992s to StartCluster
	I1108 10:20:45.517974  503626 settings.go:142] acquiring lock: {Name:mk8e7ec2fd6d0c3577198d2131f5cc3ad26178bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:45.518048  503626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:20:45.519089  503626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/kubeconfig: {Name:mk432f670e2ba850597bc7d97d115cd59ab4f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:20:45.519328  503626 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:20:45.519417  503626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:20:45.519677  503626 config.go:182] Loaded profile config "auto-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:20:45.519716  503626 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:20:45.519775  503626 addons.go:70] Setting storage-provisioner=true in profile "auto-099098"
	I1108 10:20:45.519788  503626 addons.go:239] Setting addon storage-provisioner=true in "auto-099098"
	I1108 10:20:45.519808  503626 host.go:66] Checking if "auto-099098" exists ...
	I1108 10:20:45.520532  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:45.521004  503626 addons.go:70] Setting default-storageclass=true in profile "auto-099098"
	I1108 10:20:45.521031  503626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-099098"
	I1108 10:20:45.521336  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:45.523685  503626 out.go:179] * Verifying Kubernetes components...
	I1108 10:20:45.529554  503626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:20:45.564140  503626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:20:45.564779  503626 addons.go:239] Setting addon default-storageclass=true in "auto-099098"
	I1108 10:20:45.564818  503626 host.go:66] Checking if "auto-099098" exists ...
	I1108 10:20:45.565430  503626 cli_runner.go:164] Run: docker container inspect auto-099098 --format={{.State.Status}}
	I1108 10:20:45.567523  503626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:20:45.567543  503626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:20:45.567602  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:45.594513  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:45.609492  503626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:20:45.609513  503626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:20:45.609579  503626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-099098
	I1108 10:20:45.639524  503626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/auto-099098/id_rsa Username:docker}
	I1108 10:20:45.929853  503626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:20:45.938943  503626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:20:45.965111  503626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:20:45.965227  503626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:20:47.000123  503626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.061143746s)
	I1108 10:20:47.000327  503626 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.035077173s)
	I1108 10:20:47.000548  503626 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.035369237s)
	I1108 10:20:47.000573  503626 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 10:20:47.002665  503626 node_ready.go:35] waiting up to 15m0s for node "auto-099098" to be "Ready" ...
	I1108 10:20:47.004295  503626 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1108 10:20:44.239463  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	W1108 10:20:46.240997  502245 pod_ready.go:104] pod "coredns-66bc5c9577-5nhxx" is not "Ready", error: <nil>
	I1108 10:20:47.738788  502245 pod_ready.go:94] pod "coredns-66bc5c9577-5nhxx" is "Ready"
	I1108 10:20:47.738815  502245 pod_ready.go:86] duration metric: took 30.505173686s for pod "coredns-66bc5c9577-5nhxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.741382  502245 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.745869  502245 pod_ready.go:94] pod "etcd-default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:47.745897  502245 pod_ready.go:86] duration metric: took 4.491229ms for pod "etcd-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.748220  502245 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.753342  502245 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:47.753421  502245 pod_ready.go:86] duration metric: took 5.175091ms for pod "kube-apiserver-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.755717  502245 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.937087  502245 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:47.937115  502245 pod_ready.go:86] duration metric: took 181.3715ms for pod "kube-controller-manager-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:47.008303  503626 addons.go:515] duration metric: took 1.488566288s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1108 10:20:47.504388  503626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-099098" context rescaled to 1 replicas
	I1108 10:20:48.137702  502245 pod_ready.go:83] waiting for pod "kube-proxy-lcscg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:48.536435  502245 pod_ready.go:94] pod "kube-proxy-lcscg" is "Ready"
	I1108 10:20:48.536523  502245 pod_ready.go:86] duration metric: took 398.792922ms for pod "kube-proxy-lcscg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:48.737421  502245 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:49.137302  502245 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-689864" is "Ready"
	I1108 10:20:49.137332  502245 pod_ready.go:86] duration metric: took 399.884225ms for pod "kube-scheduler-default-k8s-diff-port-689864" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:20:49.137346  502245 pod_ready.go:40] duration metric: took 31.908270404s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:20:49.198305  502245 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:20:49.201379  502245 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-689864" cluster and "default" namespace by default
	W1108 10:20:49.005975  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:20:51.006361  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:20:53.007455  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:20:55.012894  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:20:57.506470  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:21:00.012902  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	W1108 10:21:02.505622  503626 node_ready.go:57] node "auto-099098" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 08 10:20:42 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:42.634008166Z" level=info msg="Removed container 40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65/dashboard-metrics-scraper" id=2503aea4-a24c-47b6-a667-2700bb25f982 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:20:46 default-k8s-diff-port-689864 conmon[1120]: conmon f4e51831398ac84ed173 <ninfo>: container 1123 exited with status 1
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.623332816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1650373c-3963-49ee-a0db-7c79380465f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.628219043Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a215dc2b-1075-4937-8da9-45f1d284969f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.629363957Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6483aadd-8c05-4d6e-b8d7-e662f789bb19 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.629492926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.643316851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.643555623Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dba6b595ab1636cba456183e2622e0662913f2851aada75e977b12165437892a/merged/etc/passwd: no such file or directory"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.643582823Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dba6b595ab1636cba456183e2622e0662913f2851aada75e977b12165437892a/merged/etc/group: no such file or directory"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.643929452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.679829109Z" level=info msg="Created container 10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6: kube-system/storage-provisioner/storage-provisioner" id=6483aadd-8c05-4d6e-b8d7-e662f789bb19 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.681413485Z" level=info msg="Starting container: 10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6" id=3bb60c86-3b7a-4bef-9fd0-9a3d02b37a7f name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:20:46 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:46.684134793Z" level=info msg="Started container" PID=1637 containerID=10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6 description=kube-system/storage-provisioner/storage-provisioner id=3bb60c86-3b7a-4bef-9fd0-9a3d02b37a7f name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae79730b26f2003745703ef3eecb3e8c4fe3071ecf99dfa35f8c45159cad11c4
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.622230663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.629553083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.629590647Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.629614007Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.632950631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.632987186Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.633008142Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.636178225Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.636211382Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.63623384Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.640037299Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:20:55 default-k8s-diff-port-689864 crio[652]: time="2025-11-08T10:20:55.640078194Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	10d2d6703c42d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   ae79730b26f20       storage-provisioner                                    kube-system
	bb8f6efdfd72d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   9537bbde8749a       dashboard-metrics-scraper-6ffb444bf9-fgx65             kubernetes-dashboard
	acb4867f3275e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   3b9585af6566a       kubernetes-dashboard-855c9754f9-j9bdq                  kubernetes-dashboard
	a3ddbd760444e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   ad860b7e30669       busybox                                                default
	5a084c94a897e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   886fcc207e406       coredns-66bc5c9577-5nhxx                               kube-system
	0455a60ba551b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   986f7ac3b098a       kindnet-c98xc                                          kube-system
	762f453d0ed14       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   0c4c84de916cf       kube-proxy-lcscg                                       kube-system
	f4e51831398ac       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   ae79730b26f20       storage-provisioner                                    kube-system
	3c3f47aaf8c2b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   5e1de017e5a3d       kube-controller-manager-default-k8s-diff-port-689864   kube-system
	7c3023cf0ac48       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   9cd16a947f849       kube-apiserver-default-k8s-diff-port-689864            kube-system
	4b189591b949c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   54e9ef6890ab8       etcd-default-k8s-diff-port-689864                      kube-system
	0ae22b5caa485       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   dd541acd7adf7       kube-scheduler-default-k8s-diff-port-689864            kube-system
	
	
	==> coredns [5a084c94a897ef0faff55cd4571b9c32e4916c363b93ae5dc26fed7fccd7e734] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56971 - 63809 "HINFO IN 377492404350755260.8445773308788893470. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012031412s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-689864
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-689864
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=default-k8s-diff-port-689864
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_18_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:18:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-689864
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:20:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:20:45 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:20:45 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:20:45 +0000   Sat, 08 Nov 2025 10:18:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:20:45 +0000   Sat, 08 Nov 2025 10:19:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-689864
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                374121ba-37fd-4356-a88f-beebc6e065b5
	  Boot ID:                    cda8b985-1963-4204-9d3a-0a55097548b0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-5nhxx                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-689864                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-c98xc                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-689864             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-689864    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-lcscg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-689864             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fgx65              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j9bdq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m16s              kube-proxy       
	  Normal   Starting                 49s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m23s              kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m23s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s              kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m23s              kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m23s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s              node-controller  Node default-k8s-diff-port-689864 event: Registered Node default-k8s-diff-port-689864 in Controller
	  Normal   NodeReady                96s                kubelet          Node default-k8s-diff-port-689864 status is now: NodeReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-689864 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                node-controller  Node default-k8s-diff-port-689864 event: Registered Node default-k8s-diff-port-689864 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:57] overlayfs: idmapped layers are currently not supported
	[ +17.093807] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:58] overlayfs: idmapped layers are currently not supported
	[ +23.213006] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:59] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:00] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:08] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[  +4.059551] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[ +41.683316] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[ +26.370836] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[ +23.794161] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4b189591b949c1399a852982d38b83ef6f69386660f0ce7f89ebbac8ca01ebfe] <==
	{"level":"warn","ts":"2025-11-08T10:20:12.077492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.102139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.126261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.145435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.157653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.179201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.201285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.229632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.233569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.256984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.273298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.285810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.302733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.319899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.340366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.356496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.381018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.397005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.414343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.429130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.486835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.512981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.528411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.551579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:20:12.602688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50934","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:07 up  3:03,  0 user,  load average: 5.28, 4.52, 3.26
	Linux default-k8s-diff-port-689864 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0455a60ba551be5c0cb57017db7dd7feed4f40e8c8664e93b99577237ca69648] <==
	I1108 10:20:15.348157       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:20:15.348439       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:20:15.349475       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:20:15.349524       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:20:15.349540       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:20:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:20:15.618306       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:20:15.618326       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:20:15.618334       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:20:15.618646       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:20:45.625118       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:20:45.625300       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:20:45.625399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:20:45.625524       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:20:46.918519       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:20:46.918550       1 metrics.go:72] Registering metrics
	I1108 10:20:46.918629       1 controller.go:711] "Syncing nftables rules"
	I1108 10:20:55.621915       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:20:55.621955       1 main.go:301] handling current node
	I1108 10:21:05.625518       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:21:05.625550       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7c3023cf0ac48ce1231cf5627139c9c901b7e3a38e6a7f0dfb985a9bbc24f99e] <==
	I1108 10:20:14.431681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:20:14.468884       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 10:20:14.468993       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:20:14.469122       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:20:14.469168       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:20:14.565424       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:20:14.565464       1 policy_source.go:240] refreshing policies
	I1108 10:20:14.572831       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:20:14.572851       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:20:14.572858       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:20:14.572878       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:20:14.588038       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:20:14.588523       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:20:14.632954       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:20:14.839255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1108 10:20:14.874541       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:20:15.674836       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:20:15.948876       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:20:16.110515       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:20:16.170422       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:20:16.486913       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.126.65"}
	I1108 10:20:16.575796       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.186.239"}
	I1108 10:20:19.168600       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:20:19.303841       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:20:19.712597       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3c3f47aaf8c2bf2f806127afc4cef0f4e20c63bf1935191f5191a6f957bb90b2] <==
	I1108 10:20:19.125890       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:20:19.125997       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:20:19.128213       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:20:19.128335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:20:19.130470       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:20:19.133819       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:20:19.139104       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:20:19.139196       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:20:19.142969       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:20:19.143083       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:20:19.148313       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:20:19.152861       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:20:19.153147       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:20:19.153250       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:20:19.153359       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-689864"
	I1108 10:20:19.154576       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:20:19.154141       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:20:19.154515       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:20:19.154123       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:20:19.154972       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:20:19.155009       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:20:19.161342       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:20:19.173496       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:20:19.767556       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1108 10:20:19.767904       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [762f453d0ed140c7ed3168b3be237671651875c772656e7c8386789778118c3f] <==
	I1108 10:20:16.637154       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:20:16.973548       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:20:17.080831       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:20:17.081132       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:20:17.081225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:20:17.107880       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:20:17.107936       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:20:17.117177       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:20:17.117520       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:20:17.117544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:20:17.118561       1 config.go:200] "Starting service config controller"
	I1108 10:20:17.118658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:20:17.131176       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:20:17.131269       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:20:17.131344       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:20:17.131376       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:20:17.132066       1 config.go:309] "Starting node config controller"
	I1108 10:20:17.137070       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:20:17.137167       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:20:17.219493       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:20:17.231846       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:20:17.231881       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0ae22b5caa485e158ab01e45cf711300c699f6058f50e6280baa756503407fde] <==
	I1108 10:20:12.337022       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:20:16.825041       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:20:16.826888       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:20:16.846106       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:20:16.846210       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:20:16.846240       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:20:16.846265       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:20:16.848900       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:20:16.849364       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:20:16.849229       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:20:16.849547       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:20:16.947214       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:20:16.949520       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:20:16.949592       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:20:19 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:19.672293     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psgbq\" (UniqueName: \"kubernetes.io/projected/bd217ef4-1a9e-491c-9bef-24b5cf18d140-kube-api-access-psgbq\") pod \"dashboard-metrics-scraper-6ffb444bf9-fgx65\" (UID: \"bd217ef4-1a9e-491c-9bef-24b5cf18d140\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65"
	Nov 08 10:20:19 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:19.672392     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bd217ef4-1a9e-491c-9bef-24b5cf18d140-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fgx65\" (UID: \"bd217ef4-1a9e-491c-9bef-24b5cf18d140\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65"
	Nov 08 10:20:19 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:19.773657     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-j9bdq\" (UID: \"0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j9bdq"
	Nov 08 10:20:19 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:19.773721     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2sb\" (UniqueName: \"kubernetes.io/projected/0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5-kube-api-access-4k2sb\") pod \"kubernetes-dashboard-855c9754f9-j9bdq\" (UID: \"0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j9bdq"
	Nov 08 10:20:20 default-k8s-diff-port-689864 kubelet[779]: W1108 10:20:20.591220     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/crio-9537bbde8749ab8596826ab5356c7aeed6d27a2019cabcee9ecf55a98faf595c WatchSource:0}: Error finding container 9537bbde8749ab8596826ab5356c7aeed6d27a2019cabcee9ecf55a98faf595c: Status 404 returned error can't find the container with id 9537bbde8749ab8596826ab5356c7aeed6d27a2019cabcee9ecf55a98faf595c
	Nov 08 10:20:20 default-k8s-diff-port-689864 kubelet[779]: W1108 10:20:20.636540     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/48dfdc9a3efb02f7edb8f90210dcbc0406591d77352fb2108aecd188e615b47f/crio-3b9585af6566affcff431838e95a96f687a70159025fc09f34a208baeeaf8d8f WatchSource:0}: Error finding container 3b9585af6566affcff431838e95a96f687a70159025fc09f34a208baeeaf8d8f: Status 404 returned error can't find the container with id 3b9585af6566affcff431838e95a96f687a70159025fc09f34a208baeeaf8d8f
	Nov 08 10:20:26 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:26.551703     779 scope.go:117] "RemoveContainer" containerID="659287dbb35fb9f3c5f294b3d407ab68e0d692a3e3b495078099987b1f73ac69"
	Nov 08 10:20:27 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:27.559279     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:27 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:27.559576     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:20:27 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:27.570913     779 scope.go:117] "RemoveContainer" containerID="659287dbb35fb9f3c5f294b3d407ab68e0d692a3e3b495078099987b1f73ac69"
	Nov 08 10:20:28 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:28.564099     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:28 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:28.564247     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:20:30 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:30.543543     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:30 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:30.543707     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:42.273225     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:42.609869     779 scope.go:117] "RemoveContainer" containerID="40ede456e494a8e7e793335c5461039e7649c3a63334dc3625e186f53f1280ea"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:42.610490     779 scope.go:117] "RemoveContainer" containerID="bb8f6efdfd72d470271b08d8a31ef27bfa54975f23060cafa4f9726a1bce850a"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:42.610685     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:20:42 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:42.632971     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j9bdq" podStartSLOduration=10.995161373 podStartE2EDuration="23.63293728s" podCreationTimestamp="2025-11-08 10:20:19 +0000 UTC" firstStartedPulling="2025-11-08 10:20:20.646818805 +0000 UTC m=+13.794495833" lastFinishedPulling="2025-11-08 10:20:33.284594712 +0000 UTC m=+26.432271740" observedRunningTime="2025-11-08 10:20:33.600059849 +0000 UTC m=+26.747736901" watchObservedRunningTime="2025-11-08 10:20:42.63293728 +0000 UTC m=+35.780614308"
	Nov 08 10:20:46 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:46.622790     779 scope.go:117] "RemoveContainer" containerID="f4e51831398ac84ed17388fb9854f362cc97cdc451a2c0067f3ed3f0212bde73"
	Nov 08 10:20:50 default-k8s-diff-port-689864 kubelet[779]: I1108 10:20:50.543282     779 scope.go:117] "RemoveContainer" containerID="bb8f6efdfd72d470271b08d8a31ef27bfa54975f23060cafa4f9726a1bce850a"
	Nov 08 10:20:50 default-k8s-diff-port-689864 kubelet[779]: E1108 10:20:50.543481     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgx65_kubernetes-dashboard(bd217ef4-1a9e-491c-9bef-24b5cf18d140)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgx65" podUID="bd217ef4-1a9e-491c-9bef-24b5cf18d140"
	Nov 08 10:21:01 default-k8s-diff-port-689864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:21:01 default-k8s-diff-port-689864 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:21:01 default-k8s-diff-port-689864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [acb4867f3275ecac629838ded9af585b55ba0b90aec59c3613305b5f9f2c9d3d] <==
	2025/11/08 10:20:33 Using namespace: kubernetes-dashboard
	2025/11/08 10:20:33 Using in-cluster config to connect to apiserver
	2025/11/08 10:20:33 Using secret token for csrf signing
	2025/11/08 10:20:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:20:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:20:33 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:20:33 Generating JWE encryption key
	2025/11/08 10:20:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:20:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:20:34 Initializing JWE encryption key from synchronized object
	2025/11/08 10:20:34 Creating in-cluster Sidecar client
	2025/11/08 10:20:34 Serving insecurely on HTTP port: 9090
	2025/11/08 10:20:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:21:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:20:33 Starting overwatch
	
	
	==> storage-provisioner [10d2d6703c42d75f93836b575523fcac91738ba9405f01e757d0b1c5474c75a6] <==
	I1108 10:20:46.723168       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:20:46.757909       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:20:46.758102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:20:46.760809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:20:50.219856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:20:54.480598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:20:58.078930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:01.133432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:04.155672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:04.160587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:21:04.160747       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:21:04.160906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-689864_da6876e1-3006-43de-9be2-123abc6bda96!
	I1108 10:21:04.161391       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ef98997-9490-4868-b14f-87f19e537ac2", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-689864_da6876e1-3006-43de-9be2-123abc6bda96 became leader
	W1108 10:21:04.168949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:04.178611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:21:04.261887       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-689864_da6876e1-3006-43de-9be2-123abc6bda96!
	W1108 10:21:06.182290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:21:06.193046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f4e51831398ac84ed17388fb9854f362cc97cdc451a2c0067f3ed3f0212bde73] <==
	I1108 10:20:16.292090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:20:46.296347       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
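
A note on the storage-provisioner output above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings are emitted because the provisioner's leader election still renews a v1 Endpoints lock (kube-system/k8s.io-minikube-hostpath) on every heartbeat. For reference, a minimal client-go sketch of the same election using the non-deprecated coordination.k8s.io/v1 Lease lock; it assumes in-cluster config, reuses the lock name from the log as a placeholder, and is not the provisioner's actual code:

	// Sketch only: Lease-based leader election with client-go.
	// Assumes the process runs in-cluster; the lock name mirrors the log above.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// A coordination.k8s.io/v1 Lease avoids the deprecated v1 Endpoints lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}

With a Lease lock the apiserver stops printing the Endpoints deprecation warning on every renewal, while the acquire/renew/lose behaviour seen in the log stays the same.
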
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864: exit status 2 (382.563083ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-689864 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.06s)
E1108 10:27:04.275758  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
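
For reference, the post-mortem's final check (helpers_test.go:269 above) asks kubectl for every pod whose phase is not Running, across all namespaces. A minimal client-go sketch of the same field-selector query, assuming a kubeconfig at the default ~/.kube/config path (the test itself uses the profile's context instead):

	// Sketch only: list non-Running pods, mirroring
	// kubectl get po -A --field-selector=status.phase!=Running
	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust for a minikube profile as needed.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Empty namespace ("") lists across all namespaces, like kubectl's -A.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}

The field selector is evaluated server-side, so only the non-Running pods come back over the wire.
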

                                                
                                    

Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 11.62
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.41
9 TestDownloadOnly/v1.28.0/DeleteAll 0.4
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 7.39
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.3
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.16
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 161.89
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 8.83
48 TestAddons/StoppedEnableDisable 12.56
49 TestCertOptions 49.58
50 TestCertExpiration 249.48
52 TestForceSystemdFlag 31.2
53 TestForceSystemdEnv 44.76
58 TestErrorSpam/setup 33.23
59 TestErrorSpam/start 0.83
60 TestErrorSpam/status 1.06
61 TestErrorSpam/pause 6.47
62 TestErrorSpam/unpause 5.91
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.63
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 30.19
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
75 TestFunctional/serial/CacheCmd/cache/add_local 1.12
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 59.71
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.56
87 TestFunctional/serial/InvalidService 4.74
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 10.28
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.06
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 23.5
101 TestFunctional/parallel/SSHCmd 0.59
102 TestFunctional/parallel/CpCmd 2.1
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.2
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.8
113 TestFunctional/parallel/License 0.34
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.85
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
121 TestFunctional/parallel/ImageCommands/Setup 0.71
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.39
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/List 0.53
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
150 TestFunctional/parallel/ProfileCmd/profile_list 0.43
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
152 TestFunctional/parallel/MountCmd/any-port 8.04
153 TestFunctional/parallel/MountCmd/specific-port 1.96
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.09
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 199.47
163 TestMultiControlPlane/serial/DeployApp 8.58
164 TestMultiControlPlane/serial/PingHostFromPods 1.46
165 TestMultiControlPlane/serial/AddWorkerNode 61.78
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 20.24
169 TestMultiControlPlane/serial/StopSecondaryNode 12.88
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
171 TestMultiControlPlane/serial/RestartSecondaryNode 30.42
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.25
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 123.96
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.28
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.84
176 TestMultiControlPlane/serial/StopCluster 36.1
177 TestMultiControlPlane/serial/RestartCluster 83.3
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 49.1
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
185 TestJSONOutput/start/Command 79.95
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.85
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 41.58
211 TestKicCustomNetwork/use_default_bridge_network 36.39
212 TestKicExistingNetwork 34.7
213 TestKicCustomSubnet 33.05
214 TestKicStaticIP 40.01
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 75.91
219 TestMountStart/serial/StartWithMountFirst 10.25
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 9.14
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 7.97
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 133.76
231 TestMultiNode/serial/DeployApp2Nodes 5.8
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 58.87
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.74
236 TestMultiNode/serial/CopyFile 10.82
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 9.71
239 TestMultiNode/serial/RestartKeepsNodes 77.16
240 TestMultiNode/serial/DeleteNode 5.63
241 TestMultiNode/serial/StopMultiNode 24.08
242 TestMultiNode/serial/RestartMultiNode 48.1
243 TestMultiNode/serial/ValidateNameConflict 35.33
248 TestPreload 122.45
250 TestScheduledStopUnix 109.19
253 TestInsufficientStorage 13.15
254 TestRunningBinaryUpgrade 52.46
256 TestKubernetesUpgrade 354.33
257 TestMissingContainerUpgrade 110.21
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 45.72
261 TestNoKubernetes/serial/StartWithStopK8s 115.14
262 TestNoKubernetes/serial/Start 13.62
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
264 TestNoKubernetes/serial/ProfileList 35.5
265 TestNoKubernetes/serial/Stop 1.3
266 TestNoKubernetes/serial/StartNoArgs 6.85
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 0.71
269 TestStoppedBinaryUpgrade/Upgrade 52.74
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
279 TestPause/serial/Start 83.96
280 TestPause/serial/SecondStartNoReconfiguration 124.36
288 TestNetworkPlugins/group/false 3.63
294 TestStartStop/group/old-k8s-version/serial/FirstStart 65.27
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
297 TestStartStop/group/old-k8s-version/serial/Stop 12.04
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
299 TestStartStop/group/old-k8s-version/serial/SecondStart 54.31
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
305 TestStartStop/group/no-preload/serial/FirstStart 76.39
307 TestStartStop/group/embed-certs/serial/FirstStart 84.85
308 TestStartStop/group/no-preload/serial/DeployApp 9.31
310 TestStartStop/group/no-preload/serial/Stop 12.35
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/no-preload/serial/SecondStart 48.86
313 TestStartStop/group/embed-certs/serial/DeployApp 9.39
315 TestStartStop/group/embed-certs/serial/Stop 12.04
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/embed-certs/serial/SecondStart 54.34
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.4
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
329 TestStartStop/group/newest-cni/serial/FirstStart 34.81
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.52
333 TestStartStop/group/newest-cni/serial/Stop 1.34
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
335 TestStartStop/group/newest-cni/serial/SecondStart 16.43
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.5
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.66
344 TestNetworkPlugins/group/auto/Start 86.22
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
349 TestNetworkPlugins/group/calico/Start 87.39
350 TestNetworkPlugins/group/auto/KubeletFlags 0.37
351 TestNetworkPlugins/group/auto/NetCatPod 12.4
352 TestNetworkPlugins/group/auto/DNS 0.2
353 TestNetworkPlugins/group/auto/Localhost 0.22
354 TestNetworkPlugins/group/auto/HairPin 0.19
355 TestNetworkPlugins/group/custom-flannel/Start 60.7
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.4
358 TestNetworkPlugins/group/calico/NetCatPod 12.52
359 TestNetworkPlugins/group/calico/DNS 0.16
360 TestNetworkPlugins/group/calico/Localhost 0.14
361 TestNetworkPlugins/group/calico/HairPin 0.17
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
364 TestNetworkPlugins/group/kindnet/Start 88.61
365 TestNetworkPlugins/group/custom-flannel/DNS 0.22
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
368 TestNetworkPlugins/group/flannel/Start 65.49
369 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/ControllerPod 6
371 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
372 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
374 TestNetworkPlugins/group/flannel/NetCatPod 10.36
375 TestNetworkPlugins/group/kindnet/DNS 0.15
376 TestNetworkPlugins/group/kindnet/Localhost 0.14
377 TestNetworkPlugins/group/kindnet/HairPin 0.14
378 TestNetworkPlugins/group/flannel/DNS 0.18
379 TestNetworkPlugins/group/flannel/Localhost 0.14
380 TestNetworkPlugins/group/flannel/HairPin 0.14
381 TestNetworkPlugins/group/enable-default-cni/Start 57.07
382 TestNetworkPlugins/group/bridge/Start 74.14
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
389 TestNetworkPlugins/group/bridge/NetCatPod 12.39
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.12
392 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.28.0/json-events (11.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-636192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-636192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.620108356s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (11.62s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1108 09:13:04.549409  294085 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1108 09:13:04.549488  294085 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-636192
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-636192: exit status 85 (411.999421ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-636192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-636192 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:12:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:12:52.977014  294090 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:52.977122  294090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:52.977133  294090 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:52.977138  294090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:52.977431  294090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	W1108 09:12:52.977568  294090 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21866-292236/.minikube/config/config.json: open /home/jenkins/minikube-integration/21866-292236/.minikube/config/config.json: no such file or directory
	I1108 09:12:52.977964  294090 out.go:368] Setting JSON to true
	I1108 09:12:52.978796  294090 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6922,"bootTime":1762586251,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 09:12:52.978860  294090 start.go:143] virtualization:  
	I1108 09:12:52.982889  294090 out.go:99] [download-only-636192] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1108 09:12:52.983066  294090 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball: no such file or directory
	I1108 09:12:52.983183  294090 notify.go:221] Checking for updates...
	I1108 09:12:52.986699  294090 out.go:171] MINIKUBE_LOCATION=21866
	I1108 09:12:52.989767  294090 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:12:52.992608  294090 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:12:52.995591  294090 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 09:12:52.999658  294090 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1108 09:12:53.005646  294090 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 09:12:53.005963  294090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:12:53.040094  294090 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:12:53.040215  294090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:12:53.097607  294090 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-08 09:12:53.088553442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:12:53.097713  294090 docker.go:319] overlay module found
	I1108 09:12:53.100681  294090 out.go:99] Using the docker driver based on user configuration
	I1108 09:12:53.100717  294090 start.go:309] selected driver: docker
	I1108 09:12:53.100723  294090 start.go:930] validating driver "docker" against <nil>
	I1108 09:12:53.100832  294090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:12:53.161806  294090 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-08 09:12:53.153010562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:12:53.161969  294090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:12:53.162244  294090 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1108 09:12:53.162403  294090 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 09:12:53.165430  294090 out.go:171] Using Docker driver with root privileges
	I1108 09:12:53.168432  294090 cni.go:84] Creating CNI manager for ""
	I1108 09:12:53.168504  294090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:12:53.168518  294090 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:12:53.168603  294090 start.go:353] cluster config:
	{Name:download-only-636192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-636192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:12:53.171524  294090 out.go:99] Starting "download-only-636192" primary control-plane node in "download-only-636192" cluster
	I1108 09:12:53.171549  294090 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:12:53.174381  294090 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:12:53.174413  294090 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 09:12:53.174521  294090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:12:53.192240  294090 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:12:53.193107  294090 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:12:53.193218  294090 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:12:53.234614  294090 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 09:12:53.234640  294090 cache.go:59] Caching tarball of preloaded images
	I1108 09:12:53.235482  294090 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 09:12:53.238825  294090 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1108 09:12:53.238861  294090 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1108 09:12:53.325731  294090 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1108 09:12:53.325904  294090 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 09:12:58.926838  294090 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1108 09:12:58.927305  294090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/download-only-636192/config.json ...
	I1108 09:12:58.927343  294090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/download-only-636192/config.json: {Name:mk3c8032aeb484fb3609c887efd40465f929ab93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:58.927531  294090 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 09:12:58.928356  294090 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-636192 host does not exist
	  To start a cluster, run: "minikube start -p download-only-636192"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.41s)
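
The "Last Start" log above also documents the preload flow: minikube asks the GCS API for the tarball's md5 ("Got checksum from GCS API ...") and appends it as a checksum=md5: query parameter so the download can be verified on arrival. A minimal sketch of that verify-while-downloading idea, using a placeholder URL and the checksum value from the log; this is illustrative only, not minikube's actual download package:

	// Sketch only: download a file and verify its MD5 in a single pass.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash the bytes as they stream to disk; no second pass over the tarball.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}

		got := hex.EncodeToString(h.Sum(nil))
		if got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// Placeholder URL; the run above downloads the GCS preload with this md5.
		err := downloadWithMD5("https://example.com/preload.tar.lz4", "/tmp/preload.tar.lz4",
			"e092595ade89dbfc477bd4cd6b9c633b")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

Hashing through an io.MultiWriter while the file is being written is what lets the check happen without re-reading a multi-hundred-megabyte tarball.
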

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-636192
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (7.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-209768 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-209768 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.392065266s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (7.39s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1108 09:13:12.910583  294085 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1108 09:13:12.910620  294085 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-209768
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-209768: exit status 85 (88.745293ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-636192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-636192 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ delete  │ -p download-only-636192                                                                                                                                                   │ download-only-636192 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
	│ start   │ -o=json --download-only -p download-only-209768 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-209768 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:13:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:13:05.563058  294294 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:13:05.563220  294294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:13:05.563248  294294 out.go:374] Setting ErrFile to fd 2...
	I1108 09:13:05.563255  294294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:13:05.563590  294294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:13:05.564055  294294 out.go:368] Setting JSON to true
	I1108 09:13:05.564959  294294 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6935,"bootTime":1762586251,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 09:13:05.565037  294294 start.go:143] virtualization:  
	I1108 09:13:05.569181  294294 out.go:99] [download-only-209768] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:13:05.569415  294294 notify.go:221] Checking for updates...
	I1108 09:13:05.573035  294294 out.go:171] MINIKUBE_LOCATION=21866
	I1108 09:13:05.576733  294294 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:13:05.580169  294294 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:13:05.583594  294294 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 09:13:05.586917  294294 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1108 09:13:05.593480  294294 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 09:13:05.593809  294294 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:13:05.620552  294294 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:13:05.620673  294294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:13:05.676133  294294 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:13:05.666841894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:13:05.676243  294294 docker.go:319] overlay module found
	I1108 09:13:05.679441  294294 out.go:99] Using the docker driver based on user configuration
	I1108 09:13:05.679477  294294 start.go:309] selected driver: docker
	I1108 09:13:05.679484  294294 start.go:930] validating driver "docker" against <nil>
	I1108 09:13:05.679618  294294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:13:05.731472  294294 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:13:05.72162544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:13:05.731638  294294 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:13:05.731935  294294 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1108 09:13:05.732092  294294 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 09:13:05.735530  294294 out.go:171] Using Docker driver with root privileges
	I1108 09:13:05.738663  294294 cni.go:84] Creating CNI manager for ""
	I1108 09:13:05.738751  294294 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:13:05.738767  294294 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:13:05.738844  294294 start.go:353] cluster config:
	{Name:download-only-209768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-209768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:13:05.741870  294294 out.go:99] Starting "download-only-209768" primary control-plane node in "download-only-209768" cluster
	I1108 09:13:05.741902  294294 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:13:05.744880  294294 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:13:05.744986  294294 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:13:05.745044  294294 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:13:05.761817  294294 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:13:05.761939  294294 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:13:05.761965  294294 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 09:13:05.761972  294294 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 09:13:05.761984  294294 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 09:13:05.799200  294294 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 09:13:05.799237  294294 cache.go:59] Caching tarball of preloaded images
	I1108 09:13:05.799400  294294 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:13:05.802565  294294 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1108 09:13:05.802593  294294 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1108 09:13:05.890534  294294 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1108 09:13:05.890589  294294 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21866-292236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-209768 host does not exist
	  To start a cluster, run: "minikube start -p download-only-209768"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)
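The preload step above downloads the images tarball with an MD5 checksum obtained from the GCS API and appended to the download URL. As a rough illustration of that kind of post-download verification (not minikube's actual download code; the file path below is a placeholder for wherever the tarball lands under .minikube/cache/preloaded-tarball/), a minimal Go sketch:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams a file through an MD5 hash and compares the digest
	// against the hex-encoded checksum advertised for the download.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Checksum value taken from the log above; the path is illustrative.
		err := verifyMD5("preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4",
			"bc3e4aa50814345ef9ba3452bb5efb9f")
		fmt.Println(err)
	}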

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.30s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-209768
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1108 09:13:14.189850  294085 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-382750 --alsologtostderr --binary-mirror http://127.0.0.1:38109 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-382750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-382750
--- PASS: TestBinaryMirror (0.60s)
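TestBinaryMirror starts minikube with --binary-mirror pointing at a local HTTP address (127.0.0.1:38109 above) so Kubernetes binaries are fetched from the mirror instead of dl.k8s.io. The test harness's own mirror helper is not shown in this log; a minimal sketch of the kind of local file server such a mirror could be, with the served directory as an assumption:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory of pre-downloaded kubectl/kubelet/kubeadm binaries so
		// "minikube start --binary-mirror http://127.0.0.1:38109" can fetch them
		// without reaching dl.k8s.io. The directory name is illustrative.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:38109", fs))
	}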

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-461635
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-461635: exit status 85 (72.644204ms)

                                                
                                                
-- stdout --
	* Profile "addons-461635" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-461635"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-461635
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-461635: exit status 85 (79.229133ms)

                                                
                                                
-- stdout --
	* Profile "addons-461635" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-461635"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (161.89s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-461635 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-461635 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.891632382s)
--- PASS: TestAddons/Setup (161.89s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-461635 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-461635 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.83s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-461635 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-461635 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7823cd6a-4eb0-420d-b701-8acdbca2812c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7823cd6a-4eb0-420d-b701-8acdbca2812c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003930482s
addons_test.go:694: (dbg) Run:  kubectl --context addons-461635 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-461635 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-461635 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-461635 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.83s)
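The FakeCredentials test waits for the busybox pod to reach Running and then execs into it to confirm the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS. A simplified stand-in for those two steps, shelling out to kubectl with the context and label from the log (the real helpers live in helpers_test.go and addons_test.go and are not reproduced here):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Poll until the labeled pod reports Running, then read the injected env var.
		deadline := time.Now().Add(8 * time.Minute)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", "addons-461635",
				"get", "pods", "-l", "integration-test=busybox",
				"-o", "jsonpath={.items[0].status.phase}").Output()
			if strings.TrimSpace(string(out)) == "Running" {
				break
			}
			time.Sleep(2 * time.Second)
		}

		env, err := exec.Command("kubectl", "--context", "addons-461635",
			"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
		fmt.Println(strings.TrimSpace(string(env)), err)
	}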

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.56s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-461635
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-461635: (12.257502704s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-461635
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-461635
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-461635
--- PASS: TestAddons/StoppedEnableDisable (12.56s)

                                                
                                    
x
+
TestCertOptions (49.58s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-916440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-916440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (46.579251904s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-916440 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-916440 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-916440 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-916440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-916440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-916440: (2.212420571s)
--- PASS: TestCertOptions (49.58s)
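TestCertOptions passes extra --apiserver-ips and --apiserver-names and then inspects /var/lib/minikube/certs/apiserver.crt with openssl. The same SAN check can be expressed with Go's crypto/x509; a minimal sketch using the flag values from the invocation above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	// checkSANs parses a PEM-encoded certificate and reports whether the requested
	// DNS name and IP address appear in its Subject Alternative Names.
	func checkSANs(path, dnsName, ip string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}

		foundDNS := false
		for _, d := range cert.DNSNames {
			if d == dnsName {
				foundDNS = true
			}
		}
		foundIP := false
		for _, addr := range cert.IPAddresses {
			if addr.Equal(net.ParseIP(ip)) {
				foundIP = true
			}
		}
		if !foundDNS || !foundIP {
			return fmt.Errorf("SANs missing: dns=%v ip=%v", foundDNS, foundIP)
		}
		return nil
	}

	func main() {
		// Path and expected SANs taken from the test invocation above.
		fmt.Println(checkSANs("/var/lib/minikube/certs/apiserver.crt",
			"www.google.com", "192.168.15.15"))
	}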

                                                
                                    
x
+
TestCertExpiration (249.48s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-328489 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-328489 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (44.515555135s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-328489 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.0956775s)
helpers_test.go:175: Cleaning up "cert-expiration-328489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-328489
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-328489: (2.863502247s)
--- PASS: TestCertExpiration (249.48s)
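TestCertExpiration starts with --cert-expiration=3m, lets the short-lived certs age, and then restarts with 8760h so they are regenerated. A minimal sketch of inspecting a certificate's remaining lifetime (the path is illustrative; how the test itself asserts expiry is not shown in this log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Illustrative path; minikube keeps its generated certs under
		// /var/lib/minikube/certs inside the node.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		remaining := time.Until(cert.NotAfter)
		fmt.Printf("cert expires in %s\n", remaining.Round(time.Second))
		if remaining < time.Hour {
			fmt.Println("certificate is close to expiry; a restart should regenerate it")
		}
	}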

                                                
                                    
x
+
TestForceSystemdFlag (31.2s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-283081 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1108 10:10:41.142372  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-283081 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.333081977s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-283081 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-283081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-283081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-283081: (2.570205269s)
--- PASS: TestForceSystemdFlag (31.20s)
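TestForceSystemdFlag starts the cluster with --force-systemd and then cats /etc/crio/crio.conf.d/02-crio.conf over ssh, presumably to confirm CRI-O's cgroup manager setting. A sketch of that check, assuming the standard cgroup_manager = "systemd" TOML key (the exact assertion in docker_test.go is not visible in this log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Fetch the CRI-O drop-in config from the node and look for the systemd
		// cgroup manager setting. Binary and profile name are from the log above.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-283081",
			"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
		if err != nil {
			fmt.Println(err)
			return
		}
		if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is using the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not found in config")
		}
	}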

                                                
                                    
x
+
TestForceSystemdEnv (44.76s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-000082 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1108 10:11:26.426141  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-000082 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.946753116s)
helpers_test.go:175: Cleaning up "force-systemd-env-000082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-000082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-000082: (2.808875363s)
--- PASS: TestForceSystemdEnv (44.76s)

                                                
                                    
x
+
TestErrorSpam/setup (33.23s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-410248 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-410248 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-410248 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-410248 --driver=docker  --container-runtime=crio: (33.227374942s)
--- PASS: TestErrorSpam/setup (33.23s)

                                                
                                    
x
+
TestErrorSpam/start (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
x
+
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
x
+
TestErrorSpam/pause (6.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause: exit status 80 (1.909710831s)

                                                
                                                
-- stdout --
	* Pausing node nospam-410248 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:19:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause: exit status 80 (2.294339s)

                                                
                                                
-- stdout --
	* Pausing node nospam-410248 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:19:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause: exit status 80 (2.26093056s)

                                                
                                                
-- stdout --
	* Pausing node nospam-410248 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:19:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.47s)
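The pause failures above come from minikube shelling out to sudo runc list -f json, which exits non-zero because /run/runc does not exist on this CRI-O node. A rough sketch of that listing step, decoding runc's JSON output to find running containers (the struct fields follow runc's documented state output and are an assumption here, not taken from minikube's source):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// container mirrors the fields of interest in `runc list -f json` output.
	type container struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the branch the test log shows: runc exits with status 1 when
			// its state directory (/run/runc by default) does not exist.
			fmt.Println("runc list failed:", err)
			return
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, c := range cs {
			if c.Status == "running" {
				fmt.Println("running container:", c.ID)
			}
		}
	}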

                                                
                                    
x
+
TestErrorSpam/unpause (5.91s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause: exit status 80 (1.67766688s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-410248 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:20:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause: exit status 80 (2.340198103s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-410248 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:20:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause: exit status 80 (1.894818639s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-410248 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.91s)

                                                
                                    
x
+
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 stop: (1.309742131s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-410248 --log_dir /tmp/nospam-410248 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21866-292236/.minikube/files/etc/test/nested/copy/294085/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (81.63s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-356848 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1108 09:20:58.072647  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:20:58.080475  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:20:58.092289  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:20:58.114097  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:20:58.155552  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:20:58.237036  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:20:58.398542  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:20:58.720318  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:20:59.362361  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:21:00.643943  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:21:03.205398  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:21:08.327749  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:21:18.569226  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-356848 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.627166527s)
--- PASS: TestFunctional/serial/StartWithProxy (81.63s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (30.19s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1108 09:21:33.173069  294085 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-356848 --alsologtostderr -v=8
E1108 09:21:39.050962  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-356848 --alsologtostderr -v=8: (30.193216283s)
functional_test.go:678: soft start took 30.193780482s for "functional-356848" cluster.
I1108 09:22:03.366571  294085 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (30.19s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-356848 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-356848 cache add registry.k8s.io/pause:3.1: (1.161391826s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-356848 cache add registry.k8s.io/pause:3.3: (1.165140563s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-356848 cache add registry.k8s.io/pause:latest: (1.104455133s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-356848 /tmp/TestFunctionalserialCacheCmdcacheadd_local3460434626/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cache add minikube-local-cache-test:functional-356848
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cache delete minikube-local-cache-test:functional-356848
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-356848
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.849706ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
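The cache_reload test removes a cached image from the node with crictl, confirms the inspect fails, runs minikube cache reload, and confirms the image is back. A minimal Go sketch driving the same sequence with the binary, profile, and image names from the log (error handling trimmed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary under test and echoes its combined output.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p := "functional-356848"
		img := "registry.k8s.io/pause:latest"

		// Remove the image from the node, expect the follow-up inspect to fail,
		// reload the cache, then expect the inspect to succeed again.
		_ = run("-p", p, "ssh", "sudo crictl rmi "+img)
		if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err == nil {
			fmt.Println("expected image to be missing before reload")
		}
		_ = run("-p", p, "cache", "reload")
		if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}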

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 kubectl -- --context functional-356848 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-356848 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (59.71s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-356848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1108 09:22:20.013087  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-356848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.705070222s)
functional_test.go:776: restart took 59.705180131s for "functional-356848" cluster.
I1108 09:23:10.430998  294085 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (59.71s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-356848 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-356848 logs: (1.484299537s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 logs --file /tmp/TestFunctionalserialLogsFileCmd1220569850/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-356848 logs --file /tmp/TestFunctionalserialLogsFileCmd1220569850/001/logs.txt: (1.559869363s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.74s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-356848 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-356848
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-356848: exit status 115 (397.83685ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30217 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-356848 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-356848 delete -f testdata/invalidsvc.yaml: (1.084149391s)
--- PASS: TestFunctional/serial/InvalidService (4.74s)
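The SVC_UNREACHABLE exit above is reported because invalid-svc has no running pods behind it, so its NodePort URL cannot serve traffic. One way to see that state directly is to look at the service's endpoints; a small sketch using the context and service name from the log (the jsonpath expression is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// A service with no matching running pods has empty endpoints, which is
		// what drives the SVC_UNREACHABLE exit shown above.
		out, err := exec.Command("kubectl", "--context", "functional-356848",
			"get", "endpoints", "invalid-svc",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Println("no ready endpoints: the NodePort URL will not serve traffic")
		} else {
			fmt.Println("ready endpoint IPs:", string(out))
		}
	}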

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 config get cpus: exit status 14 (76.741407ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 config get cpus: exit status 14 (63.885038ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-356848 --alsologtostderr -v=1]
2025/11/08 09:33:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-356848 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 321686: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.28s)
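DashboardCmd runs minikube dashboard --url as a background daemon and then issues a GET against the proxied dashboard URL shown in the DEBUG line. A minimal sketch of polling that URL until it answers (URL from the log; the retry budget is an arbitrary choice):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("dashboard proxy is responding")
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("dashboard proxy did not become ready in time")
	}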

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-356848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-356848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (193.306424ms)

                                                
                                                
-- stdout --
	* [functional-356848] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:33:43.469157  321387 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:43.469280  321387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:43.469291  321387 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:43.469297  321387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:43.469556  321387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:33:43.469959  321387 out.go:368] Setting JSON to false
	I1108 09:33:43.470814  321387 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8173,"bootTime":1762586251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 09:33:43.470879  321387 start.go:143] virtualization:  
	I1108 09:33:43.474567  321387 out.go:179] * [functional-356848] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:33:43.478360  321387 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:33:43.478460  321387 notify.go:221] Checking for updates...
	I1108 09:33:43.484319  321387 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:43.487222  321387 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:33:43.490274  321387 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 09:33:43.493263  321387 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:33:43.496140  321387 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:33:43.499949  321387 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:43.500566  321387 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:43.523488  321387 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:33:43.523601  321387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:43.585346  321387 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 09:33:43.574925049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:43.585454  321387 docker.go:319] overlay module found
	I1108 09:33:43.588594  321387 out.go:179] * Using the docker driver based on existing profile
	I1108 09:33:43.591537  321387 start.go:309] selected driver: docker
	I1108 09:33:43.591559  321387 start.go:930] validating driver "docker" against &{Name:functional-356848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-356848 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:33:43.591663  321387 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:33:43.595296  321387 out.go:203] 
	W1108 09:33:43.598260  321387 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1108 09:33:43.601172  321387 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-356848 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
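Both dry-run invocations validate the requested configuration against the existing profile without starting anything; the first is expected to exit 23 because 250MB is below minikube's 1800MB usable minimum, while the second succeeds with the profile's stored settings. Roughly:

# expected failure: RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23
out/minikube-linux-arm64 start -p functional-356848 --dry-run --memory 250MB \
  --alsologtostderr --driver=docker --container-runtime=crio
# same dry run without the memory override passes
out/minikube-linux-arm64 start -p functional-356848 --dry-run --alsologtostderr -v=1 \
  --driver=docker --container-runtime=crio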

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-356848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-356848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.345356ms)

                                                
                                                
-- stdout --
	* [functional-356848] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:33:43.943533  321506 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:43.943714  321506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:43.943745  321506 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:43.943766  321506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:43.944161  321506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:33:43.944578  321506 out.go:368] Setting JSON to false
	I1108 09:33:43.945593  321506 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8173,"bootTime":1762586251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 09:33:43.945701  321506 start.go:143] virtualization:  
	I1108 09:33:43.948793  321506 out.go:179] * [functional-356848] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1108 09:33:43.952628  321506 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:33:43.952634  321506 notify.go:221] Checking for updates...
	I1108 09:33:43.958484  321506 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:43.961210  321506 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 09:33:43.964088  321506 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 09:33:43.966904  321506 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:33:43.969712  321506 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:33:43.973115  321506 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:43.973724  321506 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:44.005593  321506 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:33:44.005714  321506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:44.077368  321506 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 09:33:44.066036982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:44.077499  321506 docker.go:319] overlay module found
	I1108 09:33:44.080722  321506 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1108 09:33:44.083682  321506 start.go:309] selected driver: docker
	I1108 09:33:44.083705  321506 start.go:930] validating driver "docker" against &{Name:functional-356848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-356848 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:33:44.083905  321506 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:33:44.088382  321506 out.go:203] 
	W1108 09:33:44.091424  321506 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1108 09:33:44.094304  321506 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
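The InternationalLanguage check is the same failing dry run, but asserts that the user-facing messages come back in French. The locale plumbing is not visible in this excerpt; a sketch assuming the translation is selected via the standard locale environment variables (an assumption, not shown in the log):

# run the failing dry run under a French locale; the RSRC_INSUFFICIENT_REQ_MEMORY
# message should come back translated
LC_ALL=fr_FR.UTF-8 LANG=fr_FR.UTF-8 \
  out/minikube-linux-arm64 start -p functional-356848 --dry-run --memory 250MB \
  --driver=docker --container-runtime=crio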

                                                
                                    
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
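The three status calls cover the default human-readable output, a custom Go template (the "kublet" key name is copied verbatim from the test's format string), and JSON output. For reference:

out/minikube-linux-arm64 -p functional-356848 status
# pick individual fields of the status struct via a Go template
out/minikube-linux-arm64 -p functional-356848 status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-linux-arm64 -p functional-356848 status -o json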

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (23.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4f36fa27-df2e-4983-9bfd-8178932e4f39] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003147146s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-356848 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-356848 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-356848 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-356848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7448ed75-33a3-4e90-86db-e2971fe3288d] Pending
E1108 09:23:41.935442  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [7448ed75-33a3-4e90-86db-e2971fe3288d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7448ed75-33a3-4e90-86db-e2971fe3288d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003935157s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-356848 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-356848 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-356848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [cd1f719a-6d86-4c41-8421-d207eb29056f] Pending
helpers_test.go:352: "sp-pod" [cd1f719a-6d86-4c41-8421-d207eb29056f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004084533s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-356848 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.50s)
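The PersistentVolumeClaim test binds a claim through the default storage class, writes a file from one pod, deletes that pod, and then checks the file is still there from a freshly created pod. The testdata manifests are not reproduced in this report; the following is a hypothetical equivalent (claim, pod, and mount path names come from the log, the storage size is illustrative):

kubectl --context functional-356848 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# sp-pod (manifest not shown here) mounts the claim at /tmp/mount
kubectl --context functional-356848 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-356848 delete pod sp-pod
# after re-applying the same pod manifest, the data persists
kubectl --context functional-356848 exec sp-pod -- ls /tmp/mount   # foo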

                                                
                                    
TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh -n functional-356848 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cp functional-356848:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd731493601/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh -n functional-356848 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh -n functional-356848 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.10s)
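CpCmd copies a file in both directions and verifies each copy by cat-ing it over ssh; the target node can be named explicitly with -n. In short:

# host -> node
out/minikube-linux-arm64 -p functional-356848 cp testdata/cp-test.txt /home/docker/cp-test.txt
# node -> host
out/minikube-linux-arm64 -p functional-356848 cp functional-356848:/home/docker/cp-test.txt /tmp/cp-test.txt
# verify inside the node
out/minikube-linux-arm64 -p functional-356848 ssh -n functional-356848 "sudo cat /home/docker/cp-test.txt"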

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/294085/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /etc/test/nested/copy/294085/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
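FileSync checks that a file staged on the host shows up inside the node at the matching absolute path (the 294085 path component is just the test process PID, used to keep paths unique). A sketch of the mechanism, assuming the usual $MINIKUBE_HOME/files staging directory and that the sync happens on the next start of the profile:

mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/294085"
echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/294085/hosts"
# after the profile is (re)started, the file is available inside the node
out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /etc/test/nested/copy/294085/hosts"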

                                                
                                    
TestFunctional/parallel/CertSync (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/294085.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /etc/ssl/certs/294085.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/294085.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /usr/share/ca-certificates/294085.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2940852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /etc/ssl/certs/2940852.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2940852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /usr/share/ca-certificates/2940852.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)
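CertSync is the certificate variant of the same idea: a PEM staged on the host ends up both under /etc/ssl/certs and /usr/share/ca-certificates in the node, along with an OpenSSL hash-named entry (the 51391683.0 and 3ec20f2e.0 names above). A sketch, assuming the usual $MINIKUBE_HOME/certs staging directory and a hypothetical my-ca.pem on the host:

# stage a CA cert on the host; it is installed into the node on the next start
cp my-ca.pem "$MINIKUBE_HOME/certs/294085.pem"
out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /etc/ssl/certs/294085.pem"
out/minikube-linux-arm64 -p functional-356848 ssh "sudo cat /usr/share/ca-certificates/294085.pem"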

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-356848 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
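The NodeLabels check dumps every label key of the first node through a Go template, a handy pattern in its own right:

# print all label keys of the first node, space-separated
kubectl --context functional-356848 get nodes \
  --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'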

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 ssh "sudo systemctl is-active docker": exit status 1 (398.131878ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 ssh "sudo systemctl is-active containerd": exit status 1 (405.247069ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.80s)
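With crio as the active runtime, the docker and containerd units must report inactive. systemctl is-active exits 3 for an inactive unit, and minikube ssh propagates the remote failure as its own exit status 1, so the two non-zero exits above are the expected result. Spelled out:

# inactive unit: prints "inactive", remote exit 3, minikube ssh exits 1
out/minikube-linux-arm64 -p functional-356848 ssh "sudo systemctl is-active docker"
out/minikube-linux-arm64 -p functional-356848 ssh "sudo systemctl is-active containerd"
# the configured runtime should be the only active one (crio on this profile)
out/minikube-linux-arm64 -p functional-356848 ssh "sudo systemctl is-active crio"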

                                                
                                    
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-356848 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-356848 image ls --format short --alsologtostderr:
I1108 09:33:55.837297  322044 out.go:360] Setting OutFile to fd 1 ...
I1108 09:33:55.837440  322044 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:33:55.837453  322044 out.go:374] Setting ErrFile to fd 2...
I1108 09:33:55.837476  322044 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:33:55.838133  322044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
I1108 09:33:55.839020  322044 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:33:55.839234  322044 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:33:55.839955  322044 cli_runner.go:164] Run: docker container inspect functional-356848 --format={{.State.Status}}
I1108 09:33:55.857861  322044 ssh_runner.go:195] Run: systemctl --version
I1108 09:33:55.857925  322044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-356848
I1108 09:33:55.875797  322044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/functional-356848/id_rsa Username:docker}
I1108 09:33:55.980103  322044 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
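The four ImageList variants that follow differ only in the --format flag; each one shells into the node and reads sudo crictl images --output json, as the stderr traces show. Equivalent to:

# list images in every supported output format
for fmt in short table json yaml; do
  out/minikube-linux-arm64 -p functional-356848 image ls --format "$fmt"
done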

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-356848 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ latest             │ 2d5a8f08b76da │ 176MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-356848  │ 9518a2b99278a │ 1.64MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-356848 image ls --format table --alsologtostderr:
I1108 09:34:00.787254  322590 out.go:360] Setting OutFile to fd 1 ...
I1108 09:34:00.787546  322590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:34:00.787603  322590 out.go:374] Setting ErrFile to fd 2...
I1108 09:34:00.787651  322590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:34:00.787953  322590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
I1108 09:34:00.788735  322590 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:34:00.791414  322590 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:34:00.791990  322590 cli_runner.go:164] Run: docker container inspect functional-356848 --format={{.State.Status}}
I1108 09:34:00.829022  322590 ssh_runner.go:195] Run: systemctl --version
I1108 09:34:00.829073  322590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-356848
I1108 09:34:00.863114  322590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/functional-356848/id_rsa Username:docker}
I1108 09:34:00.976542  322590 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-356848 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf23273908
83c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"b4d9c9f284a4b39d47d8a3144b62055298380a2dd195dc253063578187cf96ac","repoDigests":["docker.io/library/82c38d0cd48be186e6f67e694fe974224b4156528bd17408c6c5e91fbe2346ff-tmp@sha256:387cd10d752b994adfb6cea
1548427da0f231c17b188c7175b4547ffd9a2e15b"],"repoTags":[],"size":"1638178"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"s
ize":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006678"},{"id":"9518a2b99278a195dc4865e3f396f16986219ae66f7d06a3f12797063e2b7d7c","repoDigests":["localhost/my-image@sha256:26ee9f60ad974fc2930ef88b42f997d6dd83b09e5479960e507d6716e7a53ef7"],"repoTags":["localhost/my-image:functional-356848"],"size":"1640791"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898
a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"51
9884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa6
4dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-356848 image ls --format json --alsologtostderr:
I1108 09:34:00.462856  322486 out.go:360] Setting OutFile to fd 1 ...
I1108 09:34:00.463215  322486 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:34:00.463338  322486 out.go:374] Setting ErrFile to fd 2...
I1108 09:34:00.463384  322486 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:34:00.468745  322486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
I1108 09:34:00.469748  322486 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:34:00.470084  322486 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:34:00.470705  322486 cli_runner.go:164] Run: docker container inspect functional-356848 --format={{.State.Status}}
I1108 09:34:00.495531  322486 ssh_runner.go:195] Run: systemctl --version
I1108 09:34:00.495696  322486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-356848
I1108 09:34:00.516988  322486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/functional-356848/id_rsa Username:docker}
I1108 09:34:00.631858  322486 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-356848 image ls --format yaml --alsologtostderr:
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33
repoTags:
- docker.io/library/nginx:latest
size: "176006678"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-356848 image ls --format yaml --alsologtostderr:
I1108 09:33:56.075691  322080 out.go:360] Setting OutFile to fd 1 ...
I1108 09:33:56.075855  322080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:33:56.075885  322080 out.go:374] Setting ErrFile to fd 2...
I1108 09:33:56.075905  322080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:33:56.076174  322080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
I1108 09:33:56.076878  322080 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:33:56.077070  322080 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:33:56.077644  322080 cli_runner.go:164] Run: docker container inspect functional-356848 --format={{.State.Status}}
I1108 09:33:56.096587  322080 ssh_runner.go:195] Run: systemctl --version
I1108 09:33:56.096650  322080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-356848
I1108 09:33:56.115674  322080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/functional-356848/id_rsa Username:docker}
I1108 09:33:56.223395  322080 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 ssh pgrep buildkitd: exit status 1 (280.235473ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image build -t localhost/my-image:functional-356848 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-356848 image build -t localhost/my-image:functional-356848 testdata/build --alsologtostderr: (3.394365458s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-356848 image build -t localhost/my-image:functional-356848 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b4d9c9f284a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-356848
--> 9518a2b9927
Successfully tagged localhost/my-image:functional-356848
9518a2b99278a195dc4865e3f396f16986219ae66f7d06a3f12797063e2b7d7c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-356848 image build -t localhost/my-image:functional-356848 testdata/build --alsologtostderr:
I1108 09:33:56.589214  322180 out.go:360] Setting OutFile to fd 1 ...
I1108 09:33:56.590034  322180 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:33:56.590089  322180 out.go:374] Setting ErrFile to fd 2...
I1108 09:33:56.590111  322180 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:33:56.590405  322180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
I1108 09:33:56.591078  322180 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:33:56.591764  322180 config.go:182] Loaded profile config "functional-356848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:33:56.592416  322180 cli_runner.go:164] Run: docker container inspect functional-356848 --format={{.State.Status}}
I1108 09:33:56.610539  322180 ssh_runner.go:195] Run: systemctl --version
I1108 09:33:56.610594  322180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-356848
I1108 09:33:56.627582  322180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/functional-356848/id_rsa Username:docker}
I1108 09:33:56.735252  322180 build_images.go:162] Building image from path: /tmp/build.37882402.tar
I1108 09:33:56.735326  322180 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1108 09:33:56.743024  322180 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.37882402.tar
I1108 09:33:56.746705  322180 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.37882402.tar: stat -c "%s %y" /var/lib/minikube/build/build.37882402.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.37882402.tar': No such file or directory
I1108 09:33:56.746736  322180 ssh_runner.go:362] scp /tmp/build.37882402.tar --> /var/lib/minikube/build/build.37882402.tar (3072 bytes)
I1108 09:33:56.764372  322180 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.37882402
I1108 09:33:56.772421  322180 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.37882402 -xf /var/lib/minikube/build/build.37882402.tar
I1108 09:33:56.780256  322180 crio.go:315] Building image: /var/lib/minikube/build/build.37882402
I1108 09:33:56.780319  322180 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-356848 /var/lib/minikube/build/build.37882402 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1108 09:33:59.908601  322180 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-356848 /var/lib/minikube/build/build.37882402 --cgroup-manager=cgroupfs: (3.128256346s)
I1108 09:33:59.908677  322180 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.37882402
I1108 09:33:59.917642  322180 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.37882402.tar
I1108 09:33:59.925207  322180 build_images.go:218] Built localhost/my-image:functional-356848 from /tmp/build.37882402.tar
I1108 09:33:59.925242  322180 build_images.go:134] succeeded building to: functional-356848
I1108 09:33:59.925247  322180 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
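For reference: with the crio runtime, the build log above shows minikube packing the build context into a tarball, copying it to the node, and delegating to podman build because buildkitd is not running. A minimal sketch of the same flow driven by hand, assuming the functional-356848 profile and the testdata/build context used in this run:

  # build inside the node via minikube (the command exercised by the test)
  out/minikube-linux-arm64 -p functional-356848 image build -t localhost/my-image:functional-356848 testdata/build
  # confirm the resulting image is visible to the node's container runtime
  out/minikube-linux-arm64 -p functional-356848 image ls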

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-356848
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
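The three UpdateContextCmd cases run the identical command under different subtest names; a short sketch of the workflow, assuming the functional-356848 profile (the kubectl verification step is an illustrative addition, not part of the test):

  # point the kubeconfig entry for the profile at the cluster's current endpoint
  out/minikube-linux-arm64 -p functional-356848 update-context
  # illustrative follow-up: confirm which context is now active
  kubectl config current-context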

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image rm kicbase/echo-server:functional-356848 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-356848 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-356848 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-356848 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-356848 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 317854: os: process already finished
helpers_test.go:519: unable to terminate pid 317723: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-356848 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-356848 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c7f524f5-9117-4221-9834-7d3876da6d12] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c7f524f5-9117-4221-9834-7d3876da6d12] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004631276s
I1108 09:23:36.041772  294085 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.39s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-356848 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.148.18 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
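Taken together, the TunnelCmd cases above walk the usual LoadBalancer workflow on the docker driver; a minimal sketch under the same assumptions (profile functional-356848, the nginx-svc service defined in testdata/testsvc.yaml):

  # keep a tunnel running so LoadBalancer services receive an ingress IP
  out/minikube-linux-arm64 -p functional-356848 tunnel &
  kubectl --context functional-356848 apply -f testdata/testsvc.yaml
  # read the assigned ingress IP once the service is reconciled (10.100.148.18 in this run)
  kubectl --context functional-356848 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  # that address should then answer plain HTTP from the host, which is what AccessDirect asserts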

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-356848 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 service list -o json
functional_test.go:1504: Took "524.688145ms" to run "out/minikube-linux-arm64 -p functional-356848 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
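The two service listing cases differ only in output format; the commands measured above are:

  out/minikube-linux-arm64 -p functional-356848 service list
  out/minikube-linux-arm64 -p functional-356848 service list -o json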

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "370.791007ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.679205ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "361.861868ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.806815ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
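The ProfileCmd cases time the default and light listing paths; a sketch of the variants, with the interpretation of --light inferred from the timings rather than stated as fact:

  out/minikube-linux-arm64 profile list                    # table output
  out/minikube-linux-arm64 profile list -o json            # full JSON listing (~360ms in this run)
  out/minikube-linux-arm64 profile list -o json --light    # ~55ms here, which suggests it skips per-cluster status checks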

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdany-port4230370768/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762594411328230940" to /tmp/TestFunctionalparallelMountCmdany-port4230370768/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762594411328230940" to /tmp/TestFunctionalparallelMountCmdany-port4230370768/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762594411328230940" to /tmp/TestFunctionalparallelMountCmdany-port4230370768/001/test-1762594411328230940
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.59675ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 09:33:31.688101  294085 retry.go:31] will retry after 604.576676ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  8 09:33 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  8 09:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  8 09:33 test-1762594411328230940
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh cat /mount-9p/test-1762594411328230940
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-356848 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b4556f52-1746-4720-a4b9-cc36c5d38dc8] Pending
helpers_test.go:352: "busybox-mount" [b4556f52-1746-4720-a4b9-cc36c5d38dc8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b4556f52-1746-4720-a4b9-cc36c5d38dc8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b4556f52-1746-4720-a4b9-cc36c5d38dc8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003285138s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-356848 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdany-port4230370768/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.04s)
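The any-port case drives a 9p mount end to end: start the mount, retry findmnt until it appears, exercise it from a pod, then unmount. A minimal sketch of the same flow, assuming a hypothetical host directory /tmp/demo-mount in place of the test's temp dir:

  # expose a host directory inside the node over 9p (runs until interrupted)
  out/minikube-linux-arm64 mount -p functional-356848 /tmp/demo-mount:/mount-9p &
  # verify the mount and inspect it from inside the node
  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-356848 ssh -- ls -la /mount-9p
  # a fixed port can be requested as in the specific-port case below (--port 46464),
  # and all mounts for the profile can be torn down with --kill=true, as VerifyCleanup does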

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdspecific-port2489306149/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.621163ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 09:33:39.705258  294085 retry.go:31] will retry after 537.542323ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdspecific-port2489306149/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 ssh "sudo umount -f /mount-9p": exit status 1 (284.351894ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-356848 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdspecific-port2489306149/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T" /mount1: exit status 1 (564.6546ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 09:33:41.901468  294085 retry.go:31] will retry after 617.662832ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-356848 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-356848 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-356848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3161106235/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.09s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-356848
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-356848
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-356848
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (199.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1108 09:35:58.069047  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:21.138838  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m18.587846323s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.47s)
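The StartCluster invocation above brings up a multi-control-plane cluster in one command; a sketch of the same call and the follow-up status check, under the same driver and runtime assumptions:

  out/minikube-linux-arm64 -p ha-368582 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  # per-node host / kubelet / apiserver / kubeconfig state
  out/minikube-linux-arm64 -p ha-368582 status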

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 kubectl -- rollout status deployment/busybox: (5.819074013s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-99pn2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-hkw9n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-99pn2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-hkw9n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-99pn2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-hkw9n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.58s)
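The DeployApp case applies a busybox deployment and runs the same DNS lookups against every replica; a condensed sketch using one pod name from this run (replica names will differ on other runs):

  out/minikube-linux-arm64 -p ha-368582 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-arm64 -p ha-368582 kubectl -- rollout status deployment/busybox
  # resolve an external name and the in-cluster API service from a replica
  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- nslookup kubernetes.io
  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- nslookup kubernetes.default.svc.cluster.local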

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-99pn2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-99pn2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-hkw9n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-hkw9n -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.46s)
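PingHostFromPods checks that pods can reach the host; the two probes it runs per replica, shown here with a pod name and the kic network gateway taken from this run:

  # host.minikube.internal should resolve inside the pod
  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- sh -c "nslookup host.minikube.internal"
  # the docker network gateway (192.168.49.1 here) should answer a single ICMP ping
  out/minikube-linux-arm64 -p ha-368582 kubectl -- exec busybox-7b57f96db7-5q9m4 -- sh -c "ping -c 1 192.168.49.1"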

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (61.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 node add --alsologtostderr -v 5
E1108 09:38:23.360384  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:23.366873  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:23.378282  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:23.399696  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:23.441641  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:23.523125  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:23.684630  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:24.006300  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:24.648232  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:25.930430  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:28.492678  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:33.614008  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 node add --alsologtostderr -v 5: (1m0.683316099s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5: (1.099633294s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.78s)
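AddWorkerNode grows the running cluster by one node and re-checks status; the commands exercised above are:

  out/minikube-linux-arm64 -p ha-368582 node add --alsologtostderr -v 5
  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5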

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-368582 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.091773856s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 status --output json --alsologtostderr -v 5: (1.086283296s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp testdata/cp-test.txt ha-368582:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile185331331/001/cp-test_ha-368582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582:/home/docker/cp-test.txt ha-368582-m02:/home/docker/cp-test_ha-368582_ha-368582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test_ha-368582_ha-368582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582:/home/docker/cp-test.txt ha-368582-m03:/home/docker/cp-test_ha-368582_ha-368582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m03 "sudo cat /home/docker/cp-test_ha-368582_ha-368582-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582:/home/docker/cp-test.txt ha-368582-m04:/home/docker/cp-test_ha-368582_ha-368582-m04.txt
E1108 09:38:43.855832  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m04 "sudo cat /home/docker/cp-test_ha-368582_ha-368582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp testdata/cp-test.txt ha-368582-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile185331331/001/cp-test_ha-368582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m02:/home/docker/cp-test.txt ha-368582:/home/docker/cp-test_ha-368582-m02_ha-368582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582 "sudo cat /home/docker/cp-test_ha-368582-m02_ha-368582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m02:/home/docker/cp-test.txt ha-368582-m03:/home/docker/cp-test_ha-368582-m02_ha-368582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m03 "sudo cat /home/docker/cp-test_ha-368582-m02_ha-368582-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m02:/home/docker/cp-test.txt ha-368582-m04:/home/docker/cp-test_ha-368582-m02_ha-368582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m04 "sudo cat /home/docker/cp-test_ha-368582-m02_ha-368582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp testdata/cp-test.txt ha-368582-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile185331331/001/cp-test_ha-368582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m03:/home/docker/cp-test.txt ha-368582:/home/docker/cp-test_ha-368582-m03_ha-368582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582 "sudo cat /home/docker/cp-test_ha-368582-m03_ha-368582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m03:/home/docker/cp-test.txt ha-368582-m02:/home/docker/cp-test_ha-368582-m03_ha-368582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test_ha-368582-m03_ha-368582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m03:/home/docker/cp-test.txt ha-368582-m04:/home/docker/cp-test_ha-368582-m03_ha-368582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m04 "sudo cat /home/docker/cp-test_ha-368582-m03_ha-368582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp testdata/cp-test.txt ha-368582-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile185331331/001/cp-test_ha-368582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m04:/home/docker/cp-test.txt ha-368582:/home/docker/cp-test_ha-368582-m04_ha-368582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582 "sudo cat /home/docker/cp-test_ha-368582-m04_ha-368582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m04:/home/docker/cp-test.txt ha-368582-m02:/home/docker/cp-test_ha-368582-m04_ha-368582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test_ha-368582-m04_ha-368582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 cp ha-368582-m04:/home/docker/cp-test.txt ha-368582-m03:/home/docker/cp-test_ha-368582-m04_ha-368582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m03 "sudo cat /home/docker/cp-test_ha-368582-m04_ha-368582-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.24s)
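CopyFile exercises every copy direction between the host and all four nodes; a minimal sketch of the three distinct directions, assuming the same profile and node names (the /tmp destination path is illustrative):

  # host -> node
  out/minikube-linux-arm64 -p ha-368582 cp testdata/cp-test.txt ha-368582:/home/docker/cp-test.txt
  # node -> host
  out/minikube-linux-arm64 -p ha-368582 cp ha-368582:/home/docker/cp-test.txt /tmp/cp-test_ha-368582.txt
  # node -> node, then verify on the target over ssh
  out/minikube-linux-arm64 -p ha-368582 cp ha-368582:/home/docker/cp-test.txt ha-368582-m02:/home/docker/cp-test_ha-368582_ha-368582-m02.txt
  out/minikube-linux-arm64 -p ha-368582 ssh -n ha-368582-m02 "sudo cat /home/docker/cp-test_ha-368582_ha-368582-m02.txt"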

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 node stop m02 --alsologtostderr -v 5
E1108 09:39:04.337758  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 node stop m02 --alsologtostderr -v 5: (12.065400867s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5: exit status 7 (811.467128ms)

                                                
                                                
-- stdout --
	ha-368582
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-368582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-368582-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-368582-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:39:11.440553  337594 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:39:11.440730  337594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:39:11.440744  337594 out.go:374] Setting ErrFile to fd 2...
	I1108 09:39:11.440767  337594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:39:11.441187  337594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:39:11.441566  337594 out.go:368] Setting JSON to false
	I1108 09:39:11.441608  337594 mustload.go:66] Loading cluster: ha-368582
	I1108 09:39:11.442405  337594 config.go:182] Loaded profile config "ha-368582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:39:11.442425  337594 status.go:174] checking status of ha-368582 ...
	I1108 09:39:11.443243  337594 cli_runner.go:164] Run: docker container inspect ha-368582 --format={{.State.Status}}
	I1108 09:39:11.443906  337594 notify.go:221] Checking for updates...
	I1108 09:39:11.463600  337594 status.go:371] ha-368582 host status = "Running" (err=<nil>)
	I1108 09:39:11.463623  337594 host.go:66] Checking if "ha-368582" exists ...
	I1108 09:39:11.463945  337594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-368582
	I1108 09:39:11.498836  337594 host.go:66] Checking if "ha-368582" exists ...
	I1108 09:39:11.499183  337594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:39:11.499238  337594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-368582
	I1108 09:39:11.523459  337594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/ha-368582/id_rsa Username:docker}
	I1108 09:39:11.626609  337594 ssh_runner.go:195] Run: systemctl --version
	I1108 09:39:11.633383  337594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:39:11.646950  337594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:39:11.721603  337594 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-08 09:39:11.705880873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:39:11.722167  337594 kubeconfig.go:125] found "ha-368582" server: "https://192.168.49.254:8443"
	I1108 09:39:11.722207  337594 api_server.go:166] Checking apiserver status ...
	I1108 09:39:11.722253  337594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:39:11.737123  337594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	I1108 09:39:11.746709  337594 api_server.go:182] apiserver freezer: "6:freezer:/docker/c53969e6f3ea7ef1133aaba755aaacb6619cbc82fbdba8373c13775ac51e50a5/crio/crio-0007e15bafd39df9ead1ff8745d043007c388324d9e66e1364f6400bdf45974c"
	I1108 09:39:11.746786  337594 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c53969e6f3ea7ef1133aaba755aaacb6619cbc82fbdba8373c13775ac51e50a5/crio/crio-0007e15bafd39df9ead1ff8745d043007c388324d9e66e1364f6400bdf45974c/freezer.state
	I1108 09:39:11.755331  337594 api_server.go:204] freezer state: "THAWED"
	I1108 09:39:11.755356  337594 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1108 09:39:11.763591  337594 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1108 09:39:11.763618  337594 status.go:463] ha-368582 apiserver status = Running (err=<nil>)
	I1108 09:39:11.763630  337594 status.go:176] ha-368582 status: &{Name:ha-368582 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:39:11.763673  337594 status.go:174] checking status of ha-368582-m02 ...
	I1108 09:39:11.764004  337594 cli_runner.go:164] Run: docker container inspect ha-368582-m02 --format={{.State.Status}}
	I1108 09:39:11.784732  337594 status.go:371] ha-368582-m02 host status = "Stopped" (err=<nil>)
	I1108 09:39:11.784779  337594 status.go:384] host is not running, skipping remaining checks
	I1108 09:39:11.784788  337594 status.go:176] ha-368582-m02 status: &{Name:ha-368582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:39:11.784813  337594 status.go:174] checking status of ha-368582-m03 ...
	I1108 09:39:11.785207  337594 cli_runner.go:164] Run: docker container inspect ha-368582-m03 --format={{.State.Status}}
	I1108 09:39:11.802919  337594 status.go:371] ha-368582-m03 host status = "Running" (err=<nil>)
	I1108 09:39:11.802949  337594 host.go:66] Checking if "ha-368582-m03" exists ...
	I1108 09:39:11.803269  337594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-368582-m03
	I1108 09:39:11.824094  337594 host.go:66] Checking if "ha-368582-m03" exists ...
	I1108 09:39:11.824457  337594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:39:11.824506  337594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-368582-m03
	I1108 09:39:11.843187  337594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/ha-368582-m03/id_rsa Username:docker}
	I1108 09:39:11.950665  337594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:39:11.965101  337594 kubeconfig.go:125] found "ha-368582" server: "https://192.168.49.254:8443"
	I1108 09:39:11.965127  337594 api_server.go:166] Checking apiserver status ...
	I1108 09:39:11.965176  337594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:39:11.976390  337594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1200/cgroup
	I1108 09:39:11.984547  337594 api_server.go:182] apiserver freezer: "6:freezer:/docker/afd331fcf769357d72e80aed5c1d2c0d3f99a5a177bb48df734b9e516e1f956c/crio/crio-c4cc79b5b5b3f215fc6a56997d3398734e24c6de3bff2eb979eae4329638dafb"
	I1108 09:39:11.984620  337594 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/afd331fcf769357d72e80aed5c1d2c0d3f99a5a177bb48df734b9e516e1f956c/crio/crio-c4cc79b5b5b3f215fc6a56997d3398734e24c6de3bff2eb979eae4329638dafb/freezer.state
	I1108 09:39:11.994136  337594 api_server.go:204] freezer state: "THAWED"
	I1108 09:39:11.994162  337594 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1108 09:39:12.004716  337594 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1108 09:39:12.004755  337594 status.go:463] ha-368582-m03 apiserver status = Running (err=<nil>)
	I1108 09:39:12.004765  337594 status.go:176] ha-368582-m03 status: &{Name:ha-368582-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:39:12.004785  337594 status.go:174] checking status of ha-368582-m04 ...
	I1108 09:39:12.005207  337594 cli_runner.go:164] Run: docker container inspect ha-368582-m04 --format={{.State.Status}}
	I1108 09:39:12.023115  337594 status.go:371] ha-368582-m04 host status = "Running" (err=<nil>)
	I1108 09:39:12.023141  337594 host.go:66] Checking if "ha-368582-m04" exists ...
	I1108 09:39:12.023445  337594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-368582-m04
	I1108 09:39:12.050822  337594 host.go:66] Checking if "ha-368582-m04" exists ...
	I1108 09:39:12.051120  337594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:39:12.051171  337594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-368582-m04
	I1108 09:39:12.073240  337594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/ha-368582-m04/id_rsa Username:docker}
	I1108 09:39:12.178351  337594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:39:12.193954  337594 status.go:176] ha-368582-m04 status: &{Name:ha-368582-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
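Note on the status check in the log above: on a cgroup v1 host, the status command locates the kube-apiserver process, reads its freezer cgroup path from /proc/<pid>/cgroup, and treats a freezer.state of "THAWED" as running (a paused node reports "FROZEN"). A minimal standalone sketch of that check follows; the function name and error handling are illustrative and not minikube's internal API.

// freezerstate.go: standalone sketch of the freezer-state check seen in the status log above.
// Assumes a cgroup v1 host and enough privilege to read /proc/<pid>/cgroup.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func apiserverFreezerState() (string, error) {
	// Newest kube-apiserver process, matching the `pgrep -xnf` call in the log.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not found: %w", err)
	}
	pid := strings.TrimSpace(string(out))

	// /proc/<pid>/cgroup holds lines like "6:freezer:/docker/<id>/crio/crio-<id>".
	data, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		return "", err
	}
	var freezerPath string
	for _, line := range strings.Split(string(data), "\n") {
		if parts := strings.SplitN(line, ":", 3); len(parts) == 3 && parts[1] == "freezer" {
			freezerPath = parts[2]
			break
		}
	}
	if freezerPath == "" {
		return "", fmt.Errorf("no freezer cgroup for pid %s", pid)
	}

	// freezer.state reads "THAWED" while running and "FROZEN" when the node is paused.
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + freezerPath + "/freezer.state")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(state)), nil
}

func main() {
	state, err := apiserverFreezerState()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver freezer state:", state)
}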

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (30.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 node start m02 --alsologtostderr -v 5: (28.924837445s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5: (1.338789097s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.251219872s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 stop --alsologtostderr -v 5
E1108 09:39:45.300036  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 stop --alsologtostderr -v 5: (37.521870841s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 start --wait true --alsologtostderr -v 5
E1108 09:40:58.070332  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:07.221369  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 start --wait true --alsologtostderr -v 5: (1m26.259097178s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 node delete m03 --alsologtostderr -v 5: (11.018536799s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5: (1.085366546s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.28s)
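The readiness assertion at ha_test.go:521 evaluates a Go template over `kubectl get nodes`, printing one Ready-condition status per node. Below is a small standalone wrapper that runs the same query and fails unless every node reports "True"; only the template string is taken from the log, the rest is illustrative.

// nodesready.go: run the Ready-condition template from the log and require "True" for every node.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Template copied from the test: emit each node's Ready condition status, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl get nodes:", err)
		os.Exit(1)
	}
	statuses := strings.Fields(string(out))
	for _, status := range statuses {
		if status != "True" {
			fmt.Println("not all nodes are Ready:", statuses)
			os.Exit(1)
		}
	}
	fmt.Println("all nodes Ready")
}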

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 stop --alsologtostderr -v 5: (35.990142495s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5: exit status 7 (114.505735ms)

                                                
                                                
-- stdout --
	ha-368582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-368582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-368582-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:42:37.827424  349659 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:42:37.827568  349659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:42:37.827581  349659 out.go:374] Setting ErrFile to fd 2...
	I1108 09:42:37.827586  349659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:42:37.827869  349659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:42:37.828085  349659 out.go:368] Setting JSON to false
	I1108 09:42:37.828139  349659 mustload.go:66] Loading cluster: ha-368582
	I1108 09:42:37.828216  349659 notify.go:221] Checking for updates...
	I1108 09:42:37.829568  349659 config.go:182] Loaded profile config "ha-368582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:42:37.829592  349659 status.go:174] checking status of ha-368582 ...
	I1108 09:42:37.830259  349659 cli_runner.go:164] Run: docker container inspect ha-368582 --format={{.State.Status}}
	I1108 09:42:37.848761  349659 status.go:371] ha-368582 host status = "Stopped" (err=<nil>)
	I1108 09:42:37.848784  349659 status.go:384] host is not running, skipping remaining checks
	I1108 09:42:37.848791  349659 status.go:176] ha-368582 status: &{Name:ha-368582 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:42:37.848819  349659 status.go:174] checking status of ha-368582-m02 ...
	I1108 09:42:37.849241  349659 cli_runner.go:164] Run: docker container inspect ha-368582-m02 --format={{.State.Status}}
	I1108 09:42:37.874462  349659 status.go:371] ha-368582-m02 host status = "Stopped" (err=<nil>)
	I1108 09:42:37.874487  349659 status.go:384] host is not running, skipping remaining checks
	I1108 09:42:37.874549  349659 status.go:176] ha-368582-m02 status: &{Name:ha-368582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:42:37.874575  349659 status.go:174] checking status of ha-368582-m04 ...
	I1108 09:42:37.874882  349659 cli_runner.go:164] Run: docker container inspect ha-368582-m04 --format={{.State.Status}}
	I1108 09:42:37.892830  349659 status.go:371] ha-368582-m04 host status = "Stopped" (err=<nil>)
	I1108 09:42:37.892854  349659 status.go:384] host is not running, skipping remaining checks
	I1108 09:42:37.892861  349659 status.go:176] ha-368582-m04 status: &{Name:ha-368582-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (83.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1108 09:43:23.359808  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:43:51.063011  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m21.9101399s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5: (1.200856722s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (83.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (49.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 node add --control-plane --alsologtostderr -v 5: (47.946576347s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-368582 status --alsologtostderr -v 5: (1.151819166s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (49.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.107035441s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                    
x
+
TestJSONOutput/start/Command (79.95s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-269320 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1108 09:45:58.071332  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-269320 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.944913919s)
--- PASS: TestJSONOutput/start/Command (79.95s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-269320 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-269320 --output=json --user=testUser: (5.84658716s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-023180 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-023180 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.880876ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4b561e9-7b7b-4d5e-99ef-7b70f287525a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-023180] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6741ae52-9ee8-4103-8493-b814c02068be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21866"}}
	{"specversion":"1.0","id":"7c754409-3478-48dc-87a4-b9e2939009dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7eef5660-512c-4cce-a878-25e12c417944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig"}}
	{"specversion":"1.0","id":"4bf7da75-aae3-462e-b789-644eb706f3b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube"}}
	{"specversion":"1.0","id":"f442b9a3-c636-4cbe-9f60-65dac056ad34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5060dd8a-9919-404c-89d3-dbd2097c09a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5c63c08c-3700-474b-92c8-892337d533ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-023180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-023180
--- PASS: TestErrorJSONOutput (0.24s)
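With --output=json, minikube emits one CloudEvents-style object per line; the run above ends with an io.k8s.sigs.minikube.error event carrying exitcode 56 for the unsupported 'fail' driver. A minimal line-by-line decoder for that stream is sketched below; the Event struct mirrors the fields visible in the events above and is not minikube's own type.

// jsonevents.go: sketch of a decoder for `minikube start --output=json` event lines.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type Event struct {
	Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data map[string]string `json:"data"` // message, currentstep, totalsteps, exitcode, ...
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev Event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping a JSON-mode run through it (profile name hypothetical), e.g. out/minikube-linux-arm64 start -p demo --output=json | go run jsonevents.go, prints the step progression and any terminal error event.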

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (41.58s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-379294 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-379294 --network=: (39.308770646s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-379294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-379294
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-379294: (2.238809529s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.58s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.39s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-797107 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-797107 --network=bridge: (34.227044279s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-797107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-797107
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-797107: (2.132594682s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.39s)

                                                
                                    
x
+
TestKicExistingNetwork (34.7s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1108 09:47:53.969404  294085 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1108 09:47:53.985898  294085 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1108 09:47:53.986762  294085 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1108 09:47:53.986808  294085 cli_runner.go:164] Run: docker network inspect existing-network
W1108 09:47:54.003594  294085 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1108 09:47:54.003643  294085 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1108 09:47:54.003664  294085 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1108 09:47:54.003801  294085 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1108 09:47:54.026174  294085 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a6819a8370f3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:c1:8b:c2:3d:13} reservation:<nil>}
I1108 09:47:54.026595  294085 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40002a6240}
I1108 09:47:54.026621  294085 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1108 09:47:54.026671  294085 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1108 09:47:54.089070  294085 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-102897 --network=existing-network
E1108 09:48:23.364617  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-102897 --network=existing-network: (32.411199073s)
helpers_test.go:175: Cleaning up "existing-network-102897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-102897
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-102897: (2.138275369s)
I1108 09:48:28.654986  294085 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.70s)
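The sequence above shows the test skipping 192.168.49.0/24 (already held by an existing bridge), creating existing-network on the next free /24, and then reusing it via --network=existing-network. A sketch of the same two steps done by hand follows; the subnet, labels, and profile name simply mirror the commands in the log and would need adjusting for another host.

// existingnet.go: sketch of pre-creating a Docker bridge network and starting minikube on it,
// mirroring the TestKicExistingNetwork sequence above. Values are examples, not a general recipe.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Same create command the test issues once it has picked a free /24.
	if err := run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network"); err != nil {
		fmt.Fprintln(os.Stderr, "network create:", err)
		os.Exit(1)
	}
	// Reuse the pre-existing network instead of letting minikube allocate one.
	if err := run("minikube", "start", "-p", "existing-network-demo", "--network=existing-network"); err != nil {
		fmt.Fprintln(os.Stderr, "minikube start:", err)
		os.Exit(1)
	}
}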

                                                
                                    
x
+
TestKicCustomSubnet (33.05s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-692063 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-692063 --subnet=192.168.60.0/24: (30.765550927s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-692063 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-692063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-692063
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-692063: (2.261837272s)
--- PASS: TestKicCustomSubnet (33.05s)

                                                
                                    
x
+
TestKicStaticIP (40.01s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-609789 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-609789 --static-ip=192.168.200.200: (37.595774431s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-609789 ip
helpers_test.go:175: Cleaning up "static-ip-609789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-609789
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-609789: (2.259455271s)
--- PASS: TestKicStaticIP (40.01s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (75.91s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-553077 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-553077 --driver=docker  --container-runtime=crio: (32.475642319s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-555661 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-555661 --driver=docker  --container-runtime=crio: (37.670070614s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-553077
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-555661
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-555661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-555661
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-555661: (2.272492849s)
helpers_test.go:175: Cleaning up "first-553077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-553077
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-553077: (2.058989253s)
--- PASS: TestMinikubeProfile (75.91s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (10.25s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-504219 --memory=3072 --mount-string /tmp/TestMountStartserial806906955/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1108 09:50:58.069078  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-504219 --memory=3072 --mount-string /tmp/TestMountStartserial806906955/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.248862008s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.25s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-504219 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-506204 --memory=3072 --mount-string /tmp/TestMountStartserial806906955/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-506204 --memory=3072 --mount-string /tmp/TestMountStartserial806906955/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.140249631s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-506204 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-504219 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-504219 --alsologtostderr -v=5: (1.716524472s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-506204 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-506204
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-506204: (1.276203131s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.97s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-506204
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-506204: (6.971856633s)
--- PASS: TestMountStart/serial/RestartStopped (7.97s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-506204 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (133.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-986617 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1108 09:53:23.360159  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-986617 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m13.237622069s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.76s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-986617 -- rollout status deployment/busybox: (4.054617089s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-dmx88 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-kzvz4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-dmx88 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-kzvz4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-dmx88 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-kzvz4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.80s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-dmx88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-dmx88 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-kzvz4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-986617 -- exec busybox-7b57f96db7-kzvz4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (58.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-986617 -v=5 --alsologtostderr
E1108 09:54:01.141045  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:54:46.424846  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-986617 -v=5 --alsologtostderr: (58.170819485s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.87s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-986617 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp testdata/cp-test.txt multinode-986617:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2506193327/001/cp-test_multinode-986617.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617:/home/docker/cp-test.txt multinode-986617-m02:/home/docker/cp-test_multinode-986617_multinode-986617-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m02 "sudo cat /home/docker/cp-test_multinode-986617_multinode-986617-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617:/home/docker/cp-test.txt multinode-986617-m03:/home/docker/cp-test_multinode-986617_multinode-986617-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m03 "sudo cat /home/docker/cp-test_multinode-986617_multinode-986617-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp testdata/cp-test.txt multinode-986617-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2506193327/001/cp-test_multinode-986617-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617-m02:/home/docker/cp-test.txt multinode-986617:/home/docker/cp-test_multinode-986617-m02_multinode-986617.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617 "sudo cat /home/docker/cp-test_multinode-986617-m02_multinode-986617.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617-m02:/home/docker/cp-test.txt multinode-986617-m03:/home/docker/cp-test_multinode-986617-m02_multinode-986617-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m03 "sudo cat /home/docker/cp-test_multinode-986617-m02_multinode-986617-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp testdata/cp-test.txt multinode-986617-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2506193327/001/cp-test_multinode-986617-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617-m03:/home/docker/cp-test.txt multinode-986617:/home/docker/cp-test_multinode-986617-m03_multinode-986617.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617 "sudo cat /home/docker/cp-test_multinode-986617-m03_multinode-986617.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 cp multinode-986617-m03:/home/docker/cp-test.txt multinode-986617-m02:/home/docker/cp-test_multinode-986617-m03_multinode-986617-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 ssh -n multinode-986617-m02 "sudo cat /home/docker/cp-test_multinode-986617-m03_multinode-986617-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.82s)

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-986617 node stop m03: (1.313597647s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-986617 status: exit status 7 (539.24825ms)

                                                
                                                
-- stdout --
	multinode-986617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-986617-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-986617-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-986617 status --alsologtostderr: exit status 7 (551.119485ms)

                                                
                                                
-- stdout --
	multinode-986617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-986617-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-986617-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:55:03.941624  399959 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:55:03.941749  399959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:03.941760  399959 out.go:374] Setting ErrFile to fd 2...
	I1108 09:55:03.941767  399959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:03.942142  399959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:55:03.942364  399959 out.go:368] Setting JSON to false
	I1108 09:55:03.942392  399959 mustload.go:66] Loading cluster: multinode-986617
	I1108 09:55:03.942911  399959 notify.go:221] Checking for updates...
	I1108 09:55:03.943484  399959 config.go:182] Loaded profile config "multinode-986617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:03.943533  399959 status.go:174] checking status of multinode-986617 ...
	I1108 09:55:03.944457  399959 cli_runner.go:164] Run: docker container inspect multinode-986617 --format={{.State.Status}}
	I1108 09:55:03.962670  399959 status.go:371] multinode-986617 host status = "Running" (err=<nil>)
	I1108 09:55:03.962692  399959 host.go:66] Checking if "multinode-986617" exists ...
	I1108 09:55:03.962990  399959 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-986617
	I1108 09:55:03.993046  399959 host.go:66] Checking if "multinode-986617" exists ...
	I1108 09:55:03.993387  399959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:55:03.993441  399959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-986617
	I1108 09:55:04.026111  399959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/multinode-986617/id_rsa Username:docker}
	I1108 09:55:04.130598  399959 ssh_runner.go:195] Run: systemctl --version
	I1108 09:55:04.137242  399959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:04.150834  399959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:04.209247  399959 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 09:55:04.200083535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:55:04.209787  399959 kubeconfig.go:125] found "multinode-986617" server: "https://192.168.67.2:8443"
	I1108 09:55:04.209821  399959 api_server.go:166] Checking apiserver status ...
	I1108 09:55:04.209869  399959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:55:04.221550  399959 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	I1108 09:55:04.230227  399959 api_server.go:182] apiserver freezer: "6:freezer:/docker/b8fa7d56c51b2e1e5ded5789518ffb514bc406cc4ca6313b7f2f675181d727d2/crio/crio-af8806cc7abfcbf3b3cc9f2e6f1b9280b0d9263a74e34c7db33b12ce29bf42ba"
	I1108 09:55:04.230302  399959 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8fa7d56c51b2e1e5ded5789518ffb514bc406cc4ca6313b7f2f675181d727d2/crio/crio-af8806cc7abfcbf3b3cc9f2e6f1b9280b0d9263a74e34c7db33b12ce29bf42ba/freezer.state
	I1108 09:55:04.238270  399959 api_server.go:204] freezer state: "THAWED"
	I1108 09:55:04.238297  399959 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1108 09:55:04.247481  399959 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1108 09:55:04.247513  399959 status.go:463] multinode-986617 apiserver status = Running (err=<nil>)
	I1108 09:55:04.247524  399959 status.go:176] multinode-986617 status: &{Name:multinode-986617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:55:04.247549  399959 status.go:174] checking status of multinode-986617-m02 ...
	I1108 09:55:04.247859  399959 cli_runner.go:164] Run: docker container inspect multinode-986617-m02 --format={{.State.Status}}
	I1108 09:55:04.265990  399959 status.go:371] multinode-986617-m02 host status = "Running" (err=<nil>)
	I1108 09:55:04.266018  399959 host.go:66] Checking if "multinode-986617-m02" exists ...
	I1108 09:55:04.266382  399959 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-986617-m02
	I1108 09:55:04.283903  399959 host.go:66] Checking if "multinode-986617-m02" exists ...
	I1108 09:55:04.284223  399959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:55:04.284268  399959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-986617-m02
	I1108 09:55:04.302506  399959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21866-292236/.minikube/machines/multinode-986617-m02/id_rsa Username:docker}
	I1108 09:55:04.406191  399959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:04.418956  399959 status.go:176] multinode-986617-m02 status: &{Name:multinode-986617-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:55:04.418986  399959 status.go:174] checking status of multinode-986617-m03 ...
	I1108 09:55:04.419287  399959 cli_runner.go:164] Run: docker container inspect multinode-986617-m03 --format={{.State.Status}}
	I1108 09:55:04.436248  399959 status.go:371] multinode-986617-m03 host status = "Stopped" (err=<nil>)
	I1108 09:55:04.436268  399959 status.go:384] host is not running, skipping remaining checks
	I1108 09:55:04.436274  399959 status.go:176] multinode-986617-m03 status: &{Name:multinode-986617-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
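The StopNode run above reduces to two CLI calls: stop a single node, then confirm the degraded state with "minikube status", which exits non-zero while any node is down. A minimal Go sketch of the same sequence outside the test harness, assuming minikube is on PATH; the profile name used here is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const profile = "multinode-demo" // hypothetical profile name

		// Stop the third node, mirroring "minikube node stop m03" above.
		if out, err := exec.Command("minikube", "-p", profile, "node", "stop", "m03").CombinedOutput(); err != nil {
			fmt.Printf("node stop failed: %v\n%s", err, out)
			return
		}

		// "status" exits with status 7 while a node is stopped, so the error
		// is reported but not treated as fatal here.
		out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Printf("status exited with %v (expected while m03 is stopped)\n", err)
		}
	}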

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.71s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-986617 node start m03 -v=5 --alsologtostderr: (8.894231354s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.71s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (77.16s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-986617
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-986617
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-986617: (25.030233835s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-986617 --wait=true -v=5 --alsologtostderr
E1108 09:55:58.068704  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-986617 --wait=true -v=5 --alsologtostderr: (52.004392559s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-986617
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.16s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.63s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-986617 node delete m03: (4.926183639s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)
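The readiness verification after the delete uses a kubectl go-template that prints the Ready condition of every node. A small Go sketch that runs the same template (without the extra shell quoting shown in the log) and flags anything that is not True; it assumes kubectl already points at the cluster under test:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the test uses above: emit the Ready status of each node.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		for _, f := range strings.Fields(string(out)) {
			if f != "True" {
				fmt.Println("node not Ready:", f)
				return
			}
		}
		fmt.Println("all remaining nodes report Ready=True")
	}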

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-986617 stop: (23.90000038s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-986617 status: exit status 7 (88.240933ms)

                                                
                                                
-- stdout --
	multinode-986617
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-986617-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-986617 status --alsologtostderr: exit status 7 (93.561869ms)

                                                
                                                
-- stdout --
	multinode-986617
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-986617-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:57:00.976808  407719 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:57:00.976972  407719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:57:00.976985  407719 out.go:374] Setting ErrFile to fd 2...
	I1108 09:57:00.976991  407719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:57:00.977284  407719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 09:57:00.977501  407719 out.go:368] Setting JSON to false
	I1108 09:57:00.977540  407719 mustload.go:66] Loading cluster: multinode-986617
	I1108 09:57:00.977627  407719 notify.go:221] Checking for updates...
	I1108 09:57:00.977974  407719 config.go:182] Loaded profile config "multinode-986617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:57:00.977993  407719 status.go:174] checking status of multinode-986617 ...
	I1108 09:57:00.978807  407719 cli_runner.go:164] Run: docker container inspect multinode-986617 --format={{.State.Status}}
	I1108 09:57:00.997951  407719 status.go:371] multinode-986617 host status = "Stopped" (err=<nil>)
	I1108 09:57:00.997974  407719 status.go:384] host is not running, skipping remaining checks
	I1108 09:57:00.997981  407719 status.go:176] multinode-986617 status: &{Name:multinode-986617 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:57:00.998011  407719 status.go:174] checking status of multinode-986617-m02 ...
	I1108 09:57:00.998311  407719 cli_runner.go:164] Run: docker container inspect multinode-986617-m02 --format={{.State.Status}}
	I1108 09:57:01.017979  407719 status.go:371] multinode-986617-m02 host status = "Stopped" (err=<nil>)
	I1108 09:57:01.017999  407719 status.go:384] host is not running, skipping remaining checks
	I1108 09:57:01.018006  407719 status.go:176] multinode-986617-m02 status: &{Name:multinode-986617-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.1s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-986617 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-986617 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.397009512s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-986617 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.10s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.33s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-986617
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-986617-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-986617-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.2364ms)

                                                
                                                
-- stdout --
	* [multinode-986617-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-986617-m02' is duplicated with machine name 'multinode-986617-m02' in profile 'multinode-986617'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-986617-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-986617-m03 --driver=docker  --container-runtime=crio: (32.722442852s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-986617
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-986617: exit status 80 (377.54805ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-986617 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-986617-m03 already exists in multinode-986617-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-986617-m03
E1108 09:58:23.360558  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-986617-m03: (2.081935423s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.33s)

                                                
                                    
TestPreload (122.45s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-971702 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-971702 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (59.4380097s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-971702 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-971702 image pull gcr.io/k8s-minikube/busybox: (2.380295883s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-971702
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-971702: (5.943567967s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-971702 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-971702 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.958503614s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-971702 image list
helpers_test.go:175: Cleaning up "test-preload-971702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-971702
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-971702: (2.486021574s)
--- PASS: TestPreload (122.45s)
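The preload test is a persistence check: create the cluster without preloaded images, pull an image, stop, restart, and confirm the image is still listed. A sketch of the same flow using only the flags that appear above; the profile name is hypothetical and any failing step simply aborts the sequence:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const p = "preload-demo" // hypothetical profile name
		steps := [][]string{
			{"start", "-p", p, "--memory=3072", "--preload=false", "--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.32.0"},
			{"-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox"},
			{"stop", "-p", p},
			{"start", "-p", p, "--memory=3072", "--wait=true", "--driver=docker", "--container-runtime=crio"},
			{"-p", p, "image", "list"}, // the pulled image should still be present
		}
		for _, args := range steps {
			out, err := exec.Command("minikube", args...).CombinedOutput()
			fmt.Printf("$ minikube %v\n%s", args, out)
			if err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}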

                                                
                                    
TestScheduledStopUnix (109.19s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-722732 --memory=3072 --driver=docker  --container-runtime=crio
E1108 10:00:58.069080  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-722732 --memory=3072 --driver=docker  --container-runtime=crio: (32.593250432s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-722732 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-722732 -n scheduled-stop-722732
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-722732 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1108 10:01:04.355463  294085 retry.go:31] will retry after 104.665µs: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.356549  294085 retry.go:31] will retry after 212.76µs: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.357678  294085 retry.go:31] will retry after 305.36µs: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.358756  294085 retry.go:31] will retry after 214.983µs: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.359880  294085 retry.go:31] will retry after 584.52µs: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.361026  294085 retry.go:31] will retry after 688.196µs: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.363416  294085 retry.go:31] will retry after 1.561037ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.366090  294085 retry.go:31] will retry after 2.428811ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.369307  294085 retry.go:31] will retry after 2.619915ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.372533  294085 retry.go:31] will retry after 3.628245ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.377019  294085 retry.go:31] will retry after 8.548349ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.386847  294085 retry.go:31] will retry after 5.177247ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.393119  294085 retry.go:31] will retry after 10.983659ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.404447  294085 retry.go:31] will retry after 18.350564ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.423686  294085 retry.go:31] will retry after 32.840251ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
I1108 10:01:04.457159  294085 retry.go:31] will retry after 35.159863ms: open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/scheduled-stop-722732/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-722732 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-722732 -n scheduled-stop-722732
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-722732
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-722732 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-722732
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-722732: exit status 7 (77.624478ms)

                                                
                                                
-- stdout --
	scheduled-stop-722732
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-722732 -n scheduled-stop-722732
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-722732 -n scheduled-stop-722732: exit status 7 (72.226283ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-722732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-722732
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-722732: (4.864482326s)
--- PASS: TestScheduledStopUnix (109.19s)
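The scheduled-stop flow above is driven by two flags on "minikube stop": "--schedule <duration>" arms a delayed stop, and "--cancel-scheduled" clears it before it fires. A minimal sketch, assuming a hypothetical profile that you are willing to let stop:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		if err != nil {
			fmt.Println("exit:", err)
		}
	}

	func main() {
		const p = "scheduled-demo" // hypothetical profile name
		run("stop", "-p", p, "--schedule", "5m")   // arm a stop five minutes out
		run("stop", "-p", p, "--cancel-scheduled") // cancel it again
		run("stop", "-p", p, "--schedule", "15s")  // arm a short stop; it fires ~15s later
		run("status", "-p", p)                     // exits 7 once the host reports Stopped
	}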

                                                
                                    
TestInsufficientStorage (13.15s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-694091 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-694091 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.53558728s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b859e901-1d8b-48f9-8dac-4587822d309a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-694091] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"712cd886-4359-4817-bb8e-fe5c49de46d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21866"}}
	{"specversion":"1.0","id":"a8eeb7f9-c084-4b40-ac60-faae25ca7dc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6c77ffbc-d828-4189-a9fa-3e48f42ec222","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig"}}
	{"specversion":"1.0","id":"b8e30e84-0e62-49c1-8486-587bfcdad133","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube"}}
	{"specversion":"1.0","id":"d3af8770-7ce8-48b7-9b92-092895210d14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"af1ca407-e908-46b7-9d99-3d98018156d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8af1959b-4a0e-4555-b0cc-14cb556c03cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c84637cf-1676-4cc3-931c-ca38d950133f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2741241c-5806-4b39-866e-fe65916754dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0bad43c8-fba7-496d-a9a5-0c39d9c65721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9639fa26-5daf-46a9-8775-9da206744cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-694091\" primary control-plane node in \"insufficient-storage-694091\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"817d314e-6656-4cd5-9051-21036192026f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"58931e60-e6b9-4c44-9999-e456f95e0823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d27d38df-babd-45da-bd1d-beaffc22741f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-694091 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-694091 --output=json --layout=cluster: exit status 7 (327.348416ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-694091","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-694091","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 10:02:31.260959  423931 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-694091" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-694091 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-694091 --output=json --layout=cluster: exit status 7 (307.379064ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-694091","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-694091","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 10:02:31.576871  424000 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-694091" does not appear in /home/jenkins/minikube-integration/21866-292236/kubeconfig
	E1108 10:02:31.586802  424000 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/insufficient-storage-694091/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-694091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-694091
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-694091: (1.980724642s)
--- PASS: TestInsufficientStorage (13.15s)
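The storage check relies on "status --output=json --layout=cluster", which prints a single JSON object whose StatusCode/StatusName fields encode conditions such as 507/InsufficientStorage. A small decoding sketch limited to the top-level fields visible in the output above; the profile name is hypothetical, and a non-zero exit is tolerated because a degraded cluster is exactly what is being inspected:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type clusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
	}

	func main() {
		// Exit status 7 is expected for a degraded cluster, so the error is
		// ignored and whatever was printed on stdout is decoded.
		out, _ := exec.Command("minikube", "status", "-p", "storage-demo", // hypothetical profile
			"--output=json", "--layout=cluster").Output()

		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("could not decode status:", err)
			return
		}
		fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
	}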

                                                
                                    
TestRunningBinaryUpgrade (52.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1836873455 start -p running-upgrade-109069 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1836873455 start -p running-upgrade-109069 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.572948936s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-109069 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-109069 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.204187298s)
helpers_test.go:175: Cleaning up "running-upgrade-109069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-109069
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-109069: (2.006638912s)
--- PASS: TestRunningBinaryUpgrade (52.46s)

                                                
                                    
TestKubernetesUpgrade (354.33s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-144802 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-144802 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.478641271s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-144802
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-144802: (1.333723472s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-144802 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-144802 status --format={{.Host}}: exit status 7 (71.182342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-144802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-144802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.164452869s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-144802 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-144802 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-144802 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (104.946752ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-144802] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-144802
	    minikube start -p kubernetes-upgrade-144802 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1448022 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-144802 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-144802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-144802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.991104523s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-144802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-144802
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-144802: (2.087475349s)
--- PASS: TestKubernetesUpgrade (354.33s)
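The upgrade test encodes the version policy: re-running "minikube start" on an existing profile with a newer --kubernetes-version upgrades it in place, while requesting an older version is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) and the suggested path is delete-and-recreate. A sketch of that sequence with a hypothetical profile, using only flags that appear in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func start(profile, version string) error {
		out, err := exec.Command("minikube", "start", "-p", profile,
			"--memory=3072", "--kubernetes-version="+version,
			"--driver=docker", "--container-runtime=crio").CombinedOutput()
		fmt.Printf("start %s -> err=%v\n%s", version, err, out)
		return err
	}

	func main() {
		const p = "upgrade-demo"    // hypothetical profile name
		_ = start(p, "v1.28.0")     // create the cluster on the old version
		_ = start(p, "v1.34.1")     // upgrade the same profile in place
		if err := start(p, "v1.28.0"); err != nil {
			fmt.Println("downgrade rejected as expected:", err) // exit status 106
		}
	}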

                                                
                                    
TestMissingContainerUpgrade (110.21s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.838255514 start -p missing-upgrade-693631 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.838255514 start -p missing-upgrade-693631 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.238129264s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-693631
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-693631
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-693631 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-693631 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.507204762s)
helpers_test.go:175: Cleaning up "missing-upgrade-693631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-693631
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-693631: (1.989724539s)
--- PASS: TestMissingContainerUpgrade (110.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-736823 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-736823 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (107.192792ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-736823] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.72s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-736823 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-736823 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.178435791s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-736823 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.72s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (115.14s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-736823 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1108 10:03:23.360307  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-736823 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m51.831966116s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-736823 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-736823 status -o json: exit status 2 (486.447632ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-736823","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-736823
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-736823: (2.823405101s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (115.14s)

                                                
                                    
TestNoKubernetes/serial/Start (13.62s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-736823 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-736823 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.624205962s)
--- PASS: TestNoKubernetes/serial/Start (13.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-736823 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-736823 "sudo systemctl is-active --quiet service kubelet": exit status 1 (373.174472ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
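Taken together, the Start and VerifyK8sNotRunning steps show the --no-kubernetes mode: the machine comes up, but the kubelet unit is left inactive, which is what the "systemctl is-active" probe over "minikube ssh" confirms (the non-zero exit from the ssh wrapper means the unit is not active). A sketch combining both steps, with a hypothetical profile:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const p = "nok8s-demo" // hypothetical profile name

		// Bring up the machine without Kubernetes, as in the Start step above.
		if out, err := exec.Command("minikube", "start", "-p", p, "--no-kubernetes",
			"--memory=3072", "--driver=docker", "--container-runtime=crio").CombinedOutput(); err != nil {
			fmt.Printf("start failed: %v\n%s", err, out)
			return
		}

		// Probe the kubelet unit inside the node; a non-zero exit means it is not running.
		err := exec.Command("minikube", "ssh", "-p", p,
			"sudo systemctl is-active --quiet service kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not active, as expected with --no-kubernetes:", err)
		} else {
			fmt.Println("unexpected: kubelet is active")
		}
	}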

                                                
                                    
TestNoKubernetes/serial/ProfileList (35.5s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (16.190700215s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
E1108 10:05:58.068485  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (19.309355825s)
--- PASS: TestNoKubernetes/serial/ProfileList (35.50s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-736823
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-736823: (1.303251541s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.85s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-736823 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-736823 --driver=docker  --container-runtime=crio: (6.852402772s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.85s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-736823 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-736823 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.938443ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (52.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3310715345 start -p stopped-upgrade-529544 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3310715345 start -p stopped-upgrade-529544 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.986159813s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3310715345 -p stopped-upgrade-529544 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3310715345 -p stopped-upgrade-529544 stop: (1.265196358s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-529544 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-529544 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.483919504s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (52.74s)
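
Note: the Upgrade step is a three-stage flow — create the cluster with a legacy release, stop it with that same release, then restart it with the binary under test. A minimal Go sketch of that flow, reusing the legacy binary path and profile name from the log above:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streams its output, and fails fast on error.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	legacy := "/tmp/minikube-v1.32.0.3310715345" // legacy binary path taken from the log above
	current := "out/minikube-linux-arm64"
	profile := "stopped-upgrade-529544"

	// 1. Create the cluster with the legacy release.
	run(legacy, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=crio")
	// 2. Stop it with the same release.
	run(legacy, "-p", profile, "stop")
	// 3. Restart the stopped cluster with the current binary to exercise the upgrade path.
	run(current, "start", "-p", profile, "--memory=3072", "--alsologtostderr", "-v=1",
		"--driver=docker", "--container-runtime=crio")
}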

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-529544
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-529544: (1.303077004s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
x
+
TestPause/serial/Start (83.96s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-585281 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1108 10:08:23.361069  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-585281 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.959990908s)
--- PASS: TestPause/serial/Start (83.96s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (124.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-585281 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-585281 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m4.316218132s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (124.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-099098 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-099098 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (188.121708ms)

                                                
                                                
-- stdout --
	* [false-099098] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:10:52.792878  460419 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:10:52.793072  460419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:10:52.793088  460419 out.go:374] Setting ErrFile to fd 2...
	I1108 10:10:52.793094  460419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:10:52.793461  460419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-292236/.minikube/bin
	I1108 10:10:52.794440  460419 out.go:368] Setting JSON to false
	I1108 10:10:52.795558  460419 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10402,"bootTime":1762586251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 10:10:52.795667  460419 start.go:143] virtualization:  
	I1108 10:10:52.799350  460419 out.go:179] * [false-099098] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:10:52.803234  460419 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 10:10:52.803323  460419 notify.go:221] Checking for updates...
	I1108 10:10:52.809147  460419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:10:52.812097  460419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-292236/kubeconfig
	I1108 10:10:52.814974  460419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-292236/.minikube
	I1108 10:10:52.817829  460419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:10:52.820723  460419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:10:52.824187  460419 config.go:182] Loaded profile config "pause-585281": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:10:52.824288  460419 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:10:52.846021  460419 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:10:52.846145  460419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:10:52.907836  460419 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:10:52.897831162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:10:52.907955  460419 docker.go:319] overlay module found
	I1108 10:10:52.911120  460419 out.go:179] * Using the docker driver based on user configuration
	I1108 10:10:52.914041  460419 start.go:309] selected driver: docker
	I1108 10:10:52.914062  460419 start.go:930] validating driver "docker" against <nil>
	I1108 10:10:52.914078  460419 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:10:52.917615  460419 out.go:203] 
	W1108 10:10:52.920487  460419 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1108 10:10:52.923356  460419 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-099098 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-099098" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:08:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-585281
contexts:
- context:
    cluster: pause-585281
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:08:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-585281
  name: pause-585281
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-585281
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.crt
    client-key: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-099098

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-099098"

                                                
                                                
----------------------- debugLogs end: false-099098 [took: 3.288943622s] --------------------------------
helpers_test.go:175: Cleaning up "false-099098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-099098
--- PASS: TestNetworkPlugins/group/false (3.63s)
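
Note: this group passes because the start is expected to be rejected — with the crio runtime, `--cni=false` triggers an MK_USAGE error ("The \"crio\" container runtime requires CNI") and exit code 14, and the debug dump above only reflects that no cluster was ever created. A minimal sketch asserting that behaviour, assuming the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-099098",
		"--memory=3072", "--cni=false", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()

	// Exit code 14 is the MK_USAGE failure observed in the log above.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("got the expected MK_USAGE rejection: crio requires a CNI")
		return
	}
	fmt.Println("unexpected result:", err)
}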

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (65.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1108 10:13:23.360266  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m5.271071135s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (65.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-332573 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8ced1743-f6f3-4055-9ed3-c5f2125a022a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8ced1743-f6f3-4055-9ed3-c5f2125a022a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003192027s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-332573 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)
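
Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to become Ready, and then reads its open-file limit. A minimal sketch of the same sequence with plain kubectl, assuming kubectl is on PATH and reusing the context name from this run:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the old-k8s-version context from the log above.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "old-k8s-version-332573"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"create", "-f", "testdata/busybox.yaml"},
		{"wait", "--for=condition=Ready", "pod", "-l", "integration-test=busybox", "--timeout=8m0s"},
		{"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, step := range steps {
		out, err := kubectl(step...)
		fmt.Printf("kubectl %v:\n%s", step, out)
		if err != nil {
			panic(err)
		}
	}
}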

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-332573 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-332573 --alsologtostderr -v=3: (12.036065221s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-332573 -n old-k8s-version-332573
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-332573 -n old-k8s-version-332573: exit status 7 (74.152275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-332573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
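
Note: EnableAddonAfterStop first confirms the node is stopped — in the run above `minikube status` reports "Stopped" and exits with status 7 — and then verifies that an addon can still be enabled against a stopped profile. A minimal sketch of that check, reusing the commands and profile name from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	profile := "old-k8s-version-332573"

	// `minikube status` exits non-zero (status 7 in the run above) when the host is stopped.
	out, err := exec.Command(mk, "status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("host state %q, exit code %d (stopped is expected here)\n", out, exitErr.ExitCode())
	} else if err != nil {
		panic(err)
	}

	// Enabling an addon should still succeed while the profile is stopped.
	enable := exec.Command(mk, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if err := enable.Run(); err != nil {
		panic(err)
	}
	fmt.Println("dashboard addon enabled on the stopped profile")
}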

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (54.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-332573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.927321155s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-332573 -n old-k8s-version-332573
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xppkg" [daa8854a-6b69-46b9-8b93-303b0882bea4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003360749s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xppkg" [daa8854a-6b69-46b9-8b93-303b0882bea4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004322114s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-332573 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-332573 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
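
Note: VerifyKubernetesImages lists the images present on the node and flags the ones outside the standard minikube/Kubernetes set (the kindnet and busybox images above). A minimal sketch of reading that list, assuming only that `image list --format=json` prints a JSON array to stdout (no field layout is assumed):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "old-k8s-version-332573",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	// Decode into generic maps so no particular field layout is assumed.
	var images []map[string]interface{}
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	fmt.Printf("the node reports %d images\n", len(images))
}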

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (76.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.385393582s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (84.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 10:15:58.068501  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.849030419s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.85s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-872727 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f23722ee-2a7d-4548-b3a6-705dd0782670] Pending
helpers_test.go:352: "busybox" [f23722ee-2a7d-4548-b3a6-705dd0782670] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f23722ee-2a7d-4548-b3a6-705dd0782670] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003662506s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-872727 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-872727 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-872727 --alsologtostderr -v=3: (12.345789425s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-872727 -n no-preload-872727
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-872727 -n no-preload-872727: exit status 7 (78.031212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-872727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (48.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-872727 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.396116019s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-872727 -n no-preload-872727
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-606645 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fda9aeba-3ce7-41ea-9797-1de68d199925] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fda9aeba-3ce7-41ea-9797-1de68d199925] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003257865s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-606645 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-606645 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-606645 --alsologtostderr -v=3: (12.038771139s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-606645 -n embed-certs-606645
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-606645 -n embed-certs-606645: exit status 7 (87.277368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-606645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (54.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-606645 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.970252783s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-606645 -n embed-certs-606645
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q4gsc" [819ad1c3-65e1-4aa1-9ef5-cc4151ca68be] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003912975s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q4gsc" [819ad1c3-65e1-4aa1-9ef5-cc4151ca68be] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003769561s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-872727 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-872727 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 10:18:23.360643  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/functional-356848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.397963287s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-chddn" [aae13813-227e-4300-9a66-f13600fe1537] Running
E1108 10:18:40.174790  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:40.181233  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:40.192646  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:40.214063  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:40.255415  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:40.336951  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:40.498669  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:40.820628  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:41.462228  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:18:42.743704  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003618134s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-chddn" [aae13813-227e-4300-9a66-f13600fe1537] Running
E1108 10:18:45.305465  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003686653s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-606645 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-606645 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/FirstStart (34.81s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 10:19:00.669095  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:19:21.150636  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (34.812714747s)
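The run above wires a custom pod network CIDR into kubeadm via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16. As an aside (not part of the test), a quick way to confirm the setting landed on a profile like newest-cni-330758 is to read the node's podCIDR back from the API; this is only a sketch and assumes the profile from this run is still up:

# Sketch: confirm the kubeadm pod-network-cidr passed above was applied (assumes profile newest-cni-330758 exists).
kubectl --context newest-cni-330758 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
# The printed CIDR should fall inside 10.42.0.0/16 if the extra-config was honored.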
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.81s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-689864 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [78e08397-121e-44c5-9cc0-d303ab0890eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [78e08397-121e-44c5-9cc0-d303ab0890eb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004680976s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-689864 exec busybox -- /bin/sh -c "ulimit -n"
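For reference, the workload created from testdata/busybox.yaml is a single pod carrying the integration-test=busybox label the test waits on. The actual manifest is not reproduced in this log; the snippet below is only an approximation of such a pod, reusing the busybox image that later appears in VerifyKubernetesImages, followed by the ulimit check the test runs:

# Approximation only -- the real testdata/busybox.yaml may differ.
kubectl --context default-k8s-diff-port-689864 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
# The test then inspects the container's open-file limit:
kubectl --context default-k8s-diff-port-689864 exec busybox -- /bin/sh -c "ulimit -n"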
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.52s)

TestStartStop/group/newest-cni/serial/Stop (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-330758 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-330758 --alsologtostderr -v=3: (1.339476667s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-330758 -n newest-cni-330758
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-330758 -n newest-cni-330758: exit status 7 (75.722913ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-330758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
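The status check above exits with code 7 because the host is stopped, which the test treats as acceptable ("may be ok"). A sketch of the same pattern in shell, assuming the newest-cni-330758 profile from this run; the non-zero exit is tolerated so the addon can be enabled before the next start:

# Sketch: `minikube status` exits non-zero (7 here) when the host is stopped, so don't let that abort the script.
HOST_STATE="$(out/minikube-linux-arm64 status --format='{{.Host}}' -p newest-cni-330758 -n newest-cni-330758 || true)"
if [ "$HOST_STATE" = "Stopped" ]; then
  # As in the test, the dashboard addon is enabled while the profile is stopped; it takes effect on the next start.
  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-330758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
fi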
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-330758 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.930829376s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-330758 -n newest-cni-330758
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.43s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-689864 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-689864 --alsologtostderr -v=3: (12.49761864s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.50s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-330758 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864: exit status 7 (139.619648ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-689864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-689864 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.279998481s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689864 -n default-k8s-diff-port-689864
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.66s)

TestNetworkPlugins/group/auto/Start (86.22s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.224528727s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.22s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j9bdq" [0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003247193s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j9bdq" [0c30c414-6cbf-4e5e-9bdf-1c3ec8be08e5] Running
E1108 10:20:58.068554  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003688932s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-689864 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-689864 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/calico/Start (87.39s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1108 10:21:24.033408  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m27.385747141s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.39s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-099098 "pgrep -a kubelet"
I1108 10:21:29.857996  294085 config.go:182] Loaded profile config "auto-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-099098 replace --force -f testdata/netcat-deployment.yaml
I1108 10:21:30.224870  294085 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l2zfz" [e11171e0-e352-42d1-b13e-0087a1868294] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l2zfz" [e11171e0-e352-42d1-b13e-0087a1868294] Running
E1108 10:21:36.573607  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:36.580033  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:36.591417  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:36.612845  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:36.654211  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:36.735606  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:36.897037  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:37.219056  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:37.861186  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:39.142627  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:21:41.704454  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004189134s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.40s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-099098 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
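Each CNI variant in this group gets the same three connectivity probes against the netcat deployment: in-cluster DNS, a localhost port inside the pod, and hairpin traffic back to the pod through its own service. Collected in one place (commands taken from the log above, run against the auto-099098 context):

# The three probes the NetworkPlugins tests run against the netcat deployment.
kubectl --context auto-099098 exec deployment/netcat -- nslookup kubernetes.default                   # in-cluster DNS
kubectl --context auto-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # localhost reachability
kubectl --context auto-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the netcat service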
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (60.7s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1108 10:22:17.551394  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.704382533s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.70s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-wvprl" [7139acab-7236-4e07-ac5b-8bb9a1461b41] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003369322s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-099098 "pgrep -a kubelet"
I1108 10:22:44.782672  294085 config.go:182] Loaded profile config "calico-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (12.52s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-099098 replace --force -f testdata/netcat-deployment.yaml
I1108 10:22:45.289354  294085 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jbnsb" [24361e02-d30f-4235-9829-8905707b84b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jbnsb" [24361e02-d30f-4235-9829-8905707b84b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003252221s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.52s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-099098 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-099098 "pgrep -a kubelet"
I1108 10:23:09.547808  294085 config.go:182] Loaded profile config "custom-flannel-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-099098 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rfvx4" [1621462f-d6a7-48f9-ba03-34e3da0fd1f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rfvx4" [1621462f-d6a7-48f9-ba03-34e3da0fd1f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004471773s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

TestNetworkPlugins/group/kindnet/Start (88.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m28.614475209s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.61s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-099098 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (65.49s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1108 10:24:07.874740  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/old-k8s-version-332573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:20.434429  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:33.755353  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:33.761711  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:33.773183  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:33.794632  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:33.835994  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:33.917443  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:34.079435  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:34.401609  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:35.042903  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:36.324507  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:38.885795  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:24:44.007103  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.491082482s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.49s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-grd8s" [45c79ca0-ae88-4997-9be8-cd54964adb92] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004221152s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-dfdln" [2728b4eb-5995-47bf-afd3-13c2b40278ab] Running
E1108 10:24:54.249137  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003336545s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-099098 "pgrep -a kubelet"
I1108 10:24:56.303191  294085 config.go:182] Loaded profile config "kindnet-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-099098 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-87vmt" [5b6b38a1-8354-4635-a252-db410400082e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-87vmt" [5b6b38a1-8354-4635-a252-db410400082e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003429216s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-099098 "pgrep -a kubelet"
I1108 10:24:59.665821  294085 config.go:182] Loaded profile config "flannel-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-099098 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l9qsm" [8d16b774-b798-4602-ad04-8a602208f05a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l9qsm" [8d16b774-b798-4602-ad04-8a602208f05a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004155162s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-099098 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-099098 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (57.07s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (57.073021287s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.07s)

TestNetworkPlugins/group/bridge/Start (74.14s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1108 10:25:55.692969  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/default-k8s-diff-port-689864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:25:58.068137  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-099098 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.136756245s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-099098 "pgrep -a kubelet"
E1108 10:26:30.189342  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:30.195728  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:30.207152  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:30.229534  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1108 10:26:30.241656  294085 config.go:182] Loaded profile config "enable-default-cni-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-099098 replace --force -f testdata/netcat-deployment.yaml
E1108 10:26:30.271209  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:30.352586  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sfqjl" [eaf62922-8f8a-4142-bc37-8798f574bb0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 10:26:30.514506  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:30.836171  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:31.478461  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:32.760648  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-sfqjl" [eaf62922-8f8a-4142-bc37-8798f574bb0c] Running
E1108 10:26:35.322017  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:36.573838  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/no-preload-872727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:26:40.443370  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/auto-099098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.0042175s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-099098 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-099098 "pgrep -a kubelet"
I1108 10:26:52.806260  294085 config.go:182] Loaded profile config "bridge-099098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-099098 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5cb9d" [12ea2e55-a422-4d75-896e-abb4c3c2d077] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5cb9d" [12ea2e55-a422-4d75-896e-abb4c3c2d077] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003298167s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.39s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-099098 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
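Taken together, the bridge results above form a complete manual CNI smoke test: kubelet flags, a netcat Deployment, in-cluster DNS, localhost reachability, and hairpin traffic back to the pod's own Service. For reference, the same sequence can be replayed by hand against a running profile; this sketch simply reuses the commands logged above (it assumes the bridge-099098 profile and the netcat Deployment from testdata/netcat-deployment.yaml are already up):

	out/minikube-linux-arm64 ssh -p bridge-099098 "pgrep -a kubelet"                                       # kubelet flags
	kubectl --context bridge-099098 exec deployment/netcat -- nslookup kubernetes.default                  # cluster DNS
	kubectl --context bridge-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # localhost
	kubectl --context bridge-099098 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin via own Service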

                                                
                                    

Test skip (31/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-036976 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-036976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-036976
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-708013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-708013
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-099098 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-099098" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:08:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-585281
contexts:
- context:
    cluster: pause-585281
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:08:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-585281
  name: pause-585281
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-585281
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.crt
    client-key: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-099098

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-099098"

                                                
                                                
----------------------- debugLogs end: kubenet-099098 [took: 3.274845397s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-099098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-099098
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)
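The kubenet group is skipped outright because kubenet is not a CNI plugin and the crio runtime used in this run requires one; the debug dump above only confirms that no kubenet-099098 profile was ever created. To exercise an equivalent configuration, a cluster can instead be started with an explicit CNI; a minimal sketch (the profile name is illustrative, --cni selects a built-in CNI, and the remaining flags mirror this run's configuration):

	out/minikube-linux-arm64 start -p cni-demo --driver=docker --container-runtime=crio --cni=bridge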

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1108 10:10:58.068992  294085 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/addons-461635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-099098 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-099098" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-292236/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:08:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-585281
contexts:
- context:
    cluster: pause-585281
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:08:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-585281
  name: pause-585281
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-585281
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.crt
    client-key: /home/jenkins/minikube-integration/21866-292236/.minikube/profiles/pause-585281/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-099098

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-099098" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099098"

                                                
                                                
----------------------- debugLogs end: cilium-099098 [took: 3.61183082s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-099098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-099098
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)

                                                
                                    